How to use the NVIDIA Visual Profiler with OpenCL (on Linux)?

I'm trying to use nvvp to profile OpenCL kernels.
I'm running Ubuntu 12.04 64-bit with a GTX 580 and have verified that the CUDA toolkit is working fine (I can run and profile CUDA code).
When trying to profile my OpenCL code I get:
Warning: No CUDA application was profiled, exiting
Any hints?

NVIDIA's Visual Profiler (nvvp) can be used to profile OpenCL programs, but it is more of a pain than profiling CUDA code directly.
Simon McIntosh-Smith's High Performance Computing group at the University of Bristol came up with the original solution (here), and I can verify that it works.
I'll summarise the basics:
First, the environment variable COMPUTE_PROFILE must be set; this is done with COMPUTE_PROFILE=1.
Second, a COMPUTE_PROFILE_CONFIG file must be provided; a sample I use (called nvvp.cfg) contains:
profilelogformat CSV
streamid
gpustarttimestamp
gpuendtimestamp
Next, to perform the actual profiling (in this case I'll profile an OpenCL application called HuffFramework), use:
COMPUTE_PROFILE=1 COMPUTE_PROFILE_CONFIG=nvvp.cfg ./HuffFramework
This then generates a series of opencl_profile_*.log files, where * is the thread number.
These log files can't be loaded by nvvp just yet, because all kernel function symbols have a leading OPENCL_ instead of the expected CUDA_, so replace these prefixes with a quick command like:
sed 's/OPENCL_/CUDA_/g' opencl_profile_0.log > cuda_profile_0.log
Finally, cuda_profile_0.log can now be imported into nvvp: start nvvp, go to File->Import...->Command-line Profiler, point it at cuda_profile_0.log, and presto!
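Putting the steps together, here is a small wrapper script (a sketch only, reusing the nvvp.cfg and HuffFramework names from above) that profiles the application and converts every resulting log for nvvp:

#!/bin/bash
# Profile an OpenCL application with the command-line profiler,
# then rename the OPENCL_ symbols so nvvp will accept the logs.
export COMPUTE_PROFILE=1
export COMPUTE_PROFILE_CONFIG=nvvp.cfg
./HuffFramework
for f in opencl_profile_*.log; do
    sed 's/OPENCL_/CUDA_/g' "$f" > "cuda_${f#opencl_}"
done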

nvvp can only profile CUDA applications.

Related

Can not profile a cuda code with nvprof when using CUPTI functions inside

I'm doing a simple experiment. Many of you may know the callback_metric sample code of CUPTI (located in the CUPTI folder: /usr/local/cuda/extras/CUPTI/sample/callback_metric). It contains only a simple code path for reading a metric while running a vectorAdd kernel. Everything works when I compile and run the code.
But when I run this code under the nvprof command (nvprof ./callback_metric), I get an error message:
Error: incompatible CUDA driver version
Both nvprof and other CUPTI-based codes work fine separately.
The profilers are not intended to be used in this way with applications that make use of CUPTI.
This is documented in the profiler documentation:
Here are a couple of reasons why Visual Profiler may fail to gather metric or event information.
More than one tool is trying to access the GPU. To fix this issue please make sure only one tool is using the GPU at any given point. Tools include the CUDA command line profiler, Parallel NSight Analysis Tools and Graphics Tools, and applications that use either CUPTI or PerfKit API (NVPM) to read event values.

Can I profile OpenACC kernel in C source code level?

I'm trying to speed up my code with OpenACC using the PGI 15.7 compiler.
I want to profile my code at the C source level.
I'm using the 'nvvp' profiler from CUDA 7.0.
When I run nvvp, I can use the 'analysis tab' and can find out which latency is the reason my code is slow (data dependency, conditional branch, bandwidth, etc.).
But I couldn't get line-based analysis, only 'kernel'-level analysis
(e.g. the main_300_gpu kernel took 10 s).
So I have trouble knowing where I have to fix the code.
Is there any way to profile my code at the source level?
I'm using
PGI 15.7 (using pgcc)
CUDA 7.0
NVIDIA GTX 960
Ubuntu 14.04 LTS x86_64
[my nvvp reporting screenshots]
You can also try adding the flag "-ta=tesla:lineinfo" to have the compiler add source-code association for the profiler (it's the same as the nvcc --lineinfo flag). Though, as Bob points out, the code may have been heavily transformed, so the line information may not correspond directly back to your original source.
At the current time (and on CUDA 7.5 or higher, with a cc5.2 or higher GPU), the nvvp profiler can associate various kinds of sampled execution activity with CUDA C/C++ lines of source code.
However, at the present time, this capability does not extend to OpenACC C/C++ (or Fortran) lines of source code.
It should still be possible to associate the activity with the disassembly, and it may be possible to associate it with the intermediate C source files produced by the PGI nollvm option. Neither of these will bear much resemblance to your OpenACC source code, however.
Another option for profiling OpenACC codes using PGI tools is to set the PGI_ACC_TIME=1 environment variable before executing your code. This will enable a lightweight profiler built into the runtime to give some analysis of the execution characteristics of your OpenACC code, in particular those parts associated with accelerator regions. The output is annotated so you can refer back to lines of your source code.
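As a rough illustration of those two suggestions (a sketch only; the file name, loop, and array sizes are made up), a minimal OpenACC program compiled with line information and run under the PGI runtime profiler could look like this:

/* saxpy_acc.c - hypothetical minimal OpenACC example */
#include <stdio.h>

#define N (1 << 20)
static float x[N], y[N];

int main(void)
{
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    /* This loop becomes a GPU kernel; -ta=tesla:lineinfo lets the profiler
       relate its activity back to these lines (subject to the caveats above). */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; ++i)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}

Compile and run, for example, with:
pgcc -acc -ta=tesla:lineinfo -Minfo=accel saxpy_acc.c -o saxpy_acc
PGI_ACC_TIME=1 ./saxpy_acc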

How to view CUDA library function calls in profiler?

I am using the cuFFT library. How do I modify my code to see the function calls from this library (or any other CUDA library) in the NVIDIA Visual Profiler NVVP? I am using Windows and Visual Studio 2013.
Below is my code. I convert my image and filter to the Fourier domain, then perform point-wise complex matrix multiplication in a custom CUDA kernel I wrote, and then simply perform the inverse DFT on the filtered image's spectrum. The results are accurate, but I am not able to figure out how to view the cuFFT functions in the profiler.
// Execute FFT Plans
cufftExecR2C(fftPlanFwd, (cufftReal *)d_in, (cufftComplex *)d_img_Spectrum);
cufftExecR2C(fftPlanFwd, (cufftReal *)d_filter, (cufftComplex *)d_filter_Spectrum);
// Perform complex pointwise multiplication on filter spectrum and image spectrum
pointWise_complex_matrix_mult_kernel<<<grid, block>>>(d_img_Spectrum, d_filter_Spectrum, d_filtered_Spectrum, ROWS, COLS);
// Execute FFT^-1 Plan
cufftExecC2R(fftPlanInv, (cufftComplex *)d_filtered_Spectrum, (cufftReal *)d_out);
At the entry point to the library, the library call is like any other call into a C or C++ library: it is executing on the host. Within that library call, there may be calls to CUDA kernels or other CUDA API functions, for a CUDA GPU-enabled library such as CUFFT.
The profilers (at least up through CUDA 7.0 - see note about CUDA 7.5 nvprof below) don't natively support the profiling of host code. They are primarily focused on kernel calls and CUDA API calls. A call into a library like CUFFT by itself is not considered a CUDA API call.
You haven't shown a complete profiler output, but you should see the CUFFT library make CUDA kernel calls; these will show up in the profiler output. The first two CUFFT calls prior to your pointWise_complex_matrix_mult_kernel should have one or more kernel calls each that show up to the left of that kernel, and the last CUFFT call should have one or more kernel calls that show up to the right of that kernel.
One possible way to get specific sections of host code to show up in the profiler is to use the NVTX (NVIDIA Tools Extension) library to annotate your source code, which will cause those annotations to show up in the profiler output. You might want to put an NVTX range event around the library call you wish to see identified in the profiler output.
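For example (a minimal sketch, assuming the plan and device buffers from your code above; the range name is arbitrary), wrapping the forward transforms in an NVTX range could look like:

#include <nvToolsExt.h>  // NVTX; link against nvToolsExt (e.g. nvToolsExt64_1.lib on Windows)

// ... plan and device buffers created as in the question ...
nvtxRangePushA("cuFFT forward transforms");  // this named range appears on the NVVP timeline
cufftExecR2C(fftPlanFwd, (cufftReal *)d_in, (cufftComplex *)d_img_Spectrum);
cufftExecR2C(fftPlanFwd, (cufftReal *)d_filter, (cufftComplex *)d_filter_Spectrum);
nvtxRangePop();                              // close the range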
Another approach would be to try out the new CPU profiling features in nvprof in CUDA 7.5. You can refer to section 3.4 of the Profiler guide that ships with CUDA 7.5RC.
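For example (assuming a CUDA 7.5 nvprof is on your path; the executable name is a placeholder), CPU-side profiling can be switched on from the command line:

nvprof --cpu-profiling on ./myFilterApp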
Finally, ordinary host profilers should be able to profile your CUDA application, including CUFFT library calls, but they won't have any visibility into what is happening on the GPU.
EDIT: Based on discussion in the comments below, your code appears to be similar to the simpleCUFFT sample code. When I compile and profile that code on Win7 x64, VS 2013 Community, and CUDA 7, I get the following output (zoomed in to depict the interesting part of the timeline):
You can see that there are CUFFT kernels being called both before and after the complex pointwise multiply and scale kernel that appears in that code. My suggestion would be to start by doing something similar with the simpleCUFFT sample code rather than your own code, and see if you can duplicate the output above. If so, the problem lies in your code (perhaps your CUFFT calls are failing, perhaps you need to add proper error checking, etc.)
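If you do suspect failing CUFFT calls, a minimal sketch of checking the return status (the message text here is my own) would be:

cufftResult status = cufftExecR2C(fftPlanFwd, (cufftReal *)d_in, (cufftComplex *)d_img_Spectrum);
if (status != CUFFT_SUCCESS) {
    fprintf(stderr, "cufftExecR2C (image) failed with status %d\n", (int)status);
    // handle the error, e.g. abort rather than silently using garbage data
}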

Running CUDA GUI samples from a passive (inactive) GPU

I managed to successfully run CUDA programs on a GeForce GTX 750 Ti while using an AMD Radeon HD 7900 as the rendering device (actually connected to the display) using this guide; for instance, the Vector Addition sample runs nicely. However, I can only run applications that do not produce visual output. For example, the Mandelbrot CUDA sample does not run and fails with an error:
Error: failed to get minimal extensions for demo:
Missing support for: GL_ARB_pixel_buffer_object
This sample requires:
OpenGL version 1.5
GL_ARB_vertex_buffer_object
GL_ARB_pixel_buffer_object
The error originates from asking glewIsSupported() for these extensions. Is there any way to run an application, like these CUDA samples, so that the CUDA operations run on the GTX as usual but the window is drawn on the Radeon card? I tried to convince Nsight Eclipse to run a remote debugging session with my own PC as the remote host, but something else failed right away. Is this supposed to actually work? Could it be possible to use VirtualGL?
Some of the NVIDIA CUDA samples that involve graphics, such as the Mandelbrot sample, implement an efficient rendering strategy: they bind OpenGL data structures (pixel buffer objects in the case of Mandelbrot) to the CUDA arrays containing the simulation data and render them directly from the GPU. This avoids copying the data from the device to the host at the end of each iteration of the simulation, and results in a lightning-fast rendering phase.
To answer your question: NVIDIA samples as they are need to run the rendering phase on the same GPU where the simulation phase is executed, otherwise, the GPU that handles the graphics would not have the data to be rendered in its memory.
This does not mean the samples cannot be modified to work with multiple GPUs. It should be possible to copy the simulation data back to the host at the end of each iteration, and then render it using a custom method or even send it over the network. This would require (1) modifying the code, by separating the simulation and rendering phases and making them independent, and (2) accepting the big loss in frames per second that would result.
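A minimal sketch of the extra device-to-host copy that such a modification would add at the end of each iteration (the buffer names and pixel type are hypothetical) might be:

// after the simulation kernel for this iteration has finished on the compute GPU
cudaMemcpy(h_pixels, d_pixels, width * height * sizeof(uchar4), cudaMemcpyDeviceToHost);
// h_pixels can then be handed to whatever renders on the other GPU,
// e.g. uploaded into a texture or PBO owned by the Radeon's OpenGL context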

Can GPU counters be read transparently to the application code

I am trying to profile the CUDA Rodinia benchmarks executing on a GTX 650.
I am using the code in /usr/local/cuda-5.0/extras/CUPTI/samples/event_sampling to read the instructions-executed counter. It seems strange that I do not see any change in the values reported by event_sampling whether the CUDA benchmarks are executing or not.
The event_sampling code also has some calculations of its own for which it measures the instructions executed. Unlike on a CPU, do I need to make changes to the source code of the application to be able to read GPU counters such as instructions executed?
CUPTI will only give you counter updates for kernels launched by the same process. You can still get some of these values, though not to the same level of precision, with the NVIDIA Visual Profiler or the related environment variables, without modifying the application code.
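For example (assuming nvprof from your toolkit is available; the benchmark name is a placeholder), the counter can be read without touching the benchmark's source:

nvprof --events inst_executed ./myRodiniaBenchmark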