Recently I started building an application that uses CUDA 8.0 on Visual Studio 2015. Because I have to use dynamic parallelism, I changed Code Generation from compute_20, sm_20 (the default) to compute_35, sm_35. Since I changed it, printf() invoked inside a kernel does not print anything.
Do you know a way I can use dynamic parallelism and still print something from inside the kernel?
It is perhaps worth mentioning that my graphics card is a GeForce GTX 760.
Your GeForce GTX 760 is of compute capability 3.0 and doesn't support dynamic parallelism.
Compiling for the virtual compute_35 architecture prevents your kernel from running at all, as the virtual architecture needs to be less than or equal to your device's compute capability. That is why you see no output from printf() inside the kernel.
As Robert Crovella has remarked above, you would have noticed this with proper error checking.
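For reference, a minimal sketch of that kind of error checking (the macro name is my own, not from the answer):

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Report any CUDA runtime error and abort; a launch built for an architecture
// the device doesn't support shows up here instead of failing silently.
#define CHECK_CUDA(call)                                                   \
    do {                                                                   \
        cudaError_t err_ = (call);                                         \
        if (err_ != cudaSuccess) {                                         \
            fprintf(stderr, "CUDA error '%s' at %s:%d\n",                  \
                    cudaGetErrorString(err_), __FILE__, __LINE__);         \
            exit(EXIT_FAILURE);                                            \
        }                                                                  \
    } while (0)

__global__ void hello() { printf("hello from the device\n"); }

int main() {
    hello<<<1, 1>>>();
    CHECK_CUDA(cudaGetLastError());      // catches launch errors such as "invalid device function"
    CHECK_CUDA(cudaDeviceSynchronize()); // catches errors from the kernel's execution
    return 0;
}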
It is possible to use nvprof to access/read the bank-conflict counters for a CUDA executable:
nvprof --events shared_st_bank_conflict,shared_ld_bank_conflict my_cuda_exe
However, it does not work for code that uses OpenCL rather than CUDA.
Is there any way to extract these counters outside nvprof from OpenCL environment, maybe directly from ptx?
Alternatively is there any way to convert PTX assembly generated from nvidia OpenCL compiler using clGetProgramInfo with CL_PROGRAM_BINARIES to CUDA kernel and run it using cuModuleLoadDataEx and thus be able to use nvprof?
Is there any simulation CPU backend that allows to set such parameters as bank size etc?
Additional option:
Use a converter from OpenCL to CUDA code, including features missing from CUDA such as vloadn/vstoren, float16, and the other various accessors. #define tricks only work for simple kernels. Is there any tool that provides this?
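To illustrate the #define approach mentioned in the additional option, a minimal compatibility header (my own sketch, not an existing tool) only gets you this far:

// ocl_compat.h -- illustrative sketch; covers trivial, 1D kernels only
#define __kernel extern "C" __global__
#define __global                                   // no equivalent qualifier needed in CUDA
#define __local __shared__
#define barrier(flags) __syncthreads()             // ignores the fence flags
#define get_global_id(dim) (blockIdx.x * blockDim.x + threadIdx.x)  // dimension 0 only
#define get_local_id(dim)  (threadIdx.x)                            // dimension 0 only
// vloadn/vstoren, float16 and the other vector accessors have no one-line mapping,
// which is exactly where this approach breaks down.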
Is there any way to extract these counters outside nvprof from OpenCL environment, maybe directly from ptx?
No. Nor is there in CUDA, nor in compute shaders in OpenGL, DirectX or Vulkan.
Alternatively is there any way to convert PTX assembly generated from nvidia OpenCL compiler using clGetProgramInfo with CL_PROGRAM_BINARIES to CUDA kernel and run it using cuModuleLoadDataEx and thus be able to use nvprof?
No. OpenCL PTX and CUDA PTX are not the same and can't be used interchangeably.
Is there any simulation CPU backend that allows to set such parameters as bank size etc?
Not that I am aware of.
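For context, here is a minimal CUDA kernel (my own illustration, launched with e.g. <<<1, 32>>>) whose strided shared-memory accesses would register in the shared_st_bank_conflict / shared_ld_bank_conflict counters from the nvprof command above:

__global__ void bank_conflict_demo(float *out)
{
    __shared__ float tile[32 * 32];
    int tid = threadIdx.x;
    // Stride-32 access: with 32 banks of 4-byte words, every thread in the
    // warp hits bank 0, giving a 32-way store conflict here...
    tile[tid * 32] = (float)tid;
    __syncthreads();
    // ...and a matching 32-way load conflict here.
    out[tid] = tile[tid * 32];
}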
I'm involved in a project where I have to do GPU programming; one of my constraints is to do it on an NVIDIA device (thus in CUDA).
But I don't have access to a device equipped with an NVIDIA GPU.
So I would like to know if there is any wrapper that would allow me to write CUDA code but execute it as OpenCL code, so that it works on an AMD GPU?
PS: gpuocelot would fit well IF I didn't have to do it on a Windows system.
Is the "CUDA" constraint an actual one? Because GPU programming on NVIDIA hardware doesn't necessarily imply CUDA. You have other possible solutions such as:
OpenCL, which you mentioned already, which is quite complex and cumbersome to use, but which opens up plenty of possible back-ends.
Thrust which permits you to target NVIDIA GPUs with a CUDA back-end, or CPUs with an OpenMP and a TBB back-end.
OpenACC with the PGI compiler, which (AFAIK) can target both NVIDIA and AMD GPUs.
If it were me and the code permitting, I would try to develop using Thrust. But that's up to you.
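For example, a Thrust SAXPY along these lines (a sketch, assuming Thrust covers the operations the project needs) compiles unchanged against the CUDA back-end or the CPU back-ends:

#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/functional.h>

int main()
{
    const int n = 1 << 20;
    thrust::device_vector<float> x(n, 1.0f);
    thrust::device_vector<float> y(n, 2.0f);

    // y = 2*x + y, written with Thrust placeholder expressions
    using namespace thrust::placeholders;
    thrust::transform(x.begin(), x.end(), y.begin(), y.begin(),
                      2.0f * _1 + _2);
    return 0;
}

Back-end selection happens at build time: nvcc gives the CUDA back-end by default, while defining THRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP (or _TBB) and compiling the same source as plain C++ retargets it to the CPU.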
You could take a look at GPU Ocelot. According to its website:
Ocelot currently allows CUDA programs to be executed on NVIDIA GPUs, AMD GPUs, and x86-CPUs at full speed without recompilation.
It's mentioned that CUDA 5 allows library calls from kernels.
Does that mean CUDA 5 can use Thrust or the STL inside device code, then?
CUDA 5 has a device code linker for the first time. It means you can have separate object files of device functions and link against them rather than having to declare them at compilation unit scope. It also adds the ability for kernels to call other kernels (but only on compute 3.5 Kepler devices).
None of this means that C++ standard library templates or Thrust can be used inside kernel code.
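To illustrate what the device code linker does enable (file and function names are my own), a __device__ function can now be defined in one translation unit and called from a kernel in another:

// util.cu -- device function defined in its own compilation unit
__device__ float twice(float x) { return 2.0f * x; }

// main.cu -- only a declaration is needed; the device linker resolves the call
extern __device__ float twice(float x);

__global__ void scale_kernel(float *data)
{
    data[threadIdx.x] = twice(data[threadIdx.x]);
}

// Build with relocatable device code:
//   nvcc -arch=sm_30 -rdc=true util.cu main.cu -o app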
I'm trying to use nvvp to profile OpenCL kernels.
I'm running Ubuntu 12.04 64-bit with a GTX 580 and have verified that the CUDA toolkit is working fine (I can run and profile CUDA code).
When trying to profile my OpenCL code I get:
Warning: No CUDA application was profiled, exiting
Any hints?
Nvidia's visual profiler (nvvp) can be used to profile OpenCL programs, but it is more of a pain than profiling in CUDA directly.
Simon McIntosh's High Performance Computing group over at the University of Bristol came up with the original solution (here), and I can verify it works.
I'll summarise the basics:
Firstly, the environment variable COMPUTE_PROFILE must be set; this is done with COMPUTE_PROFILE=1.
Secondly, a COMPUTE_PROFILE_CONFIG file must be provided; a sample I use (called nvvp.cfg) contains:
profilelogformat CSV
streamid
gpustarttimestamp
gpuendtimestamp
Next, perform the actual profiling; in this case I'll profile an OpenCL application called HuffFramework, using:
COMPUTE_PROFILE=1 COMPUTE_PROFILE_CONFIG=nvvp.cfg ./HuffFramework
This then generates a series of opencl_profile_*.log files, where * is the number of threads.
These log files can't be loaded by nvvp just yet, as all kernel function symbols have a leading OPENCL_ instead of the expected CUDA_, so replace these symbols with a quick script like so:
sed 's/OPENCL_/CUDA_/g' opencl_profile_0.log > cuda_profile_0.log
Finally, cuda_profile_0.log can now be imported into nvvp: start nvvp, go to File->Import...->Command-line Profiler, point it at cuda_profile_0.log, and presto!
nvvp can only profile CUDA applications.
I've been googling around and have only been able to find a trivial example of the new dynamic parallelism in Compute Capability 3.0 in one of their Tech Briefs linked from here. I'm aware that the HPC-specific cards probably won't be available until this time next year (after the nat'l labs get theirs). And yes, I realize that the simple example they gave is enough to get you going, but the more the merrier.
Are there other examples I've missed?
To save you the trouble, here is the entire example given in the tech brief:
__global__ void ChildKernel(void* data){
    // Operate on data
}
__global__ void ParentKernel(void *data){
    ChildKernel<<<16, 1>>>(data);
}
// In Host Code
ParentKernel<<<256, 64>>>(data);
// Recursion is also supported
__global__ void RecursiveKernel(void* data){
    if(continueRecursion == true)
        RecursiveKernel<<<64, 16>>>(data);
}
EDIT:
The GTC talk New Features In the CUDA Programming Model focused mostly on the new Dynamic Parallelism in CUDA 5. The link has the video and slides. Still only toy examples, but a lot more detail than the tech brief above.
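For completeness, a self-contained version of the pattern above (still a toy; kernel names and sizes are illustrative) would look roughly like this once CUDA 5 and a compute capability 3.5 device are in hand:

#include <cstdio>

__global__ void ChildKernel(int *data)
{
    data[threadIdx.x] += 1;
}

__global__ void ParentKernel(int *data)
{
    // One thread of the parent block launches a child grid...
    if (threadIdx.x == 0)
        ChildKernel<<<1, 32>>>(data);
    // ...and the parent waits for it before continuing.
    cudaDeviceSynchronize();
}

int main()
{
    int *d_data;
    cudaMalloc(&d_data, 32 * sizeof(int));
    cudaMemset(d_data, 0, 32 * sizeof(int));
    ParentKernel<<<1, 32>>>(d_data);
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}

// Build: nvcc -arch=sm_35 -rdc=true -lcudadevrt example.cu -o example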
Here is what you need, the Dynamic parallelism programming guide. Full of details and examples: http://docs.nvidia.com/cuda/pdf/CUDA_Dynamic_Parallelism_Programming_Guide.pdf
Just to confirm: dynamic parallelism is only supported on GPUs with a compute capability of 3.5 or higher.
I have a 3.0 GPU with CUDA 5.0 installed. I compiled the dynamic parallelism examples with
nvcc -arch=sm_30 test.cu
and received the following compile error:
test.cu(10): error: calling a global function("child_launch") from a global function("parent_launch") is only allowed on the compute_35 architecture or above.
GPU info
Device 0: "GeForce GT 640"
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
hope this helps
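For reference, on a compute capability 3.5 device the same source builds with relocatable device code and the device runtime library:
nvcc -arch=sm_35 -rdc=true test.cu -lcudadevrt -o test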
I edited the question title to "...CUDA 5...", since Dynamic Parallelism is new in CUDA 5, not CUDA 4. We don't have any public examples available yet, because we don't have public hardware available that can run them. CUDA 5.0 will support dynamic parallelism but only on Compute Capability 3.5 and later (GK110, for example). These will be available later in the year.
We will release some examples with a CUDA 5 release candidate closer to the time the hardware is available.
I think compute capability 3.0 doesn't include dynamic parallelism. It will be included in the GK110 architecture (aka "Big Kepler"); I don't know what compute capability number it will be assigned (3.1, maybe). Those cards won't be available until late this year (I'm waiting sooo much for those). As far as I know, 3.0 corresponds to the GK104 chips like the GTX690 or the GT640M for laptops.
Just wanted to check in with you all given that the CUDA 5 RC was released recently. I looked in the SDK examples and wasn't able to find any dynamic parallelism there. Someone correct me if I'm wrong. I searched for kernel launches within kernels by grepping for "<<<" and found nothing.