Nvidia CUDA: Profiler indicates memory transfer operations are not performed asynchronously

I have profiled my CUDA application and the profiling results are not as I would expect them to be.
Here's a summary of how my application works:
Four streams are used.
A CPU loop polls the state of each stream.
If a stream is found to be idle, a function called launch_job is invoked.
This function looks like this:
void launch_job(cudaStream_t stream, ...)
{
    cudaMemcpyAsync(..., stream);
    cuda_process_kernel<<<grid, block, 0, stream>>>(...);
    cudaError_t err = cudaGetLastError();
    if (err) ...
    cudaMemcpyAsync(..., stream);
}
For the first block of four kernel launches seen in the profiler screenshot, a different stream is used each time launch_job is called.
However there is no overlapping of the memory transfers or the kernel execution.
I would have expected to see at least one memory transfer overlapped with a kernel execution, if not both memory transfers. (One is in the H2D direction, the other in the D2H direction, but that was probably obvious.)
Have I fundamentally misunderstood something about the way in which streams work? Or is there some other reason why my launch_job function does not produce parallelized memory transfer and kernel function execution?

Please try this:
For each stream, do cudaMemcpyAsync(..., stream) to copy H2D.
For each stream, launch the kernels on that stream;
For each stream, do cudaMemcpyAsync(..., stream) to copy D2H.
Note that you now have three for loops. If your GPU supports it, your profiler should show some overlap among the different streams.
Also, if your data is really small, say only 1 MB, you may not see much overlap; it will be more obvious if you copy something like 100 MB on each stream.
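As an illustration, here is a minimal, self-contained sketch of that issue order (the kernel body, sizes, and launch configuration are made up; the host buffers are pinned with cudaMallocHost, which is also required for the async copies to overlap with kernel execution):

// Sketch only: issue all H2D copies, then all kernels, then all D2H copies,
// each set distributed across the four streams.
#include <cuda_runtime.h>

__global__ void cuda_process_kernel(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;   // placeholder work
}

int main()
{
    const int NSTREAMS = 4;
    const int N = 1 << 22;                        // big enough to make overlap visible
    const size_t bytes = N * sizeof(float);

    float *h_in[NSTREAMS], *h_out[NSTREAMS], *d_in[NSTREAMS], *d_out[NSTREAMS];
    cudaStream_t stream[NSTREAMS];

    for (int i = 0; i < NSTREAMS; ++i) {
        cudaMallocHost(&h_in[i], bytes);          // pinned host memory
        cudaMallocHost(&h_out[i], bytes);
        cudaMalloc(&d_in[i], bytes);
        cudaMalloc(&d_out[i], bytes);
        cudaStreamCreate(&stream[i]);
    }

    for (int i = 0; i < NSTREAMS; ++i)            // loop 1: all H2D copies
        cudaMemcpyAsync(d_in[i], h_in[i], bytes, cudaMemcpyHostToDevice, stream[i]);

    for (int i = 0; i < NSTREAMS; ++i)            // loop 2: all kernel launches
        cuda_process_kernel<<<(N + 255) / 256, 256, 0, stream[i]>>>(d_in[i], d_out[i], N);

    for (int i = 0; i < NSTREAMS; ++i)            // loop 3: all D2H copies
        cudaMemcpyAsync(h_out[i], d_out[i], bytes, cudaMemcpyDeviceToHost, stream[i]);

    cudaDeviceSynchronize();
    return 0;
}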

Related

several cuda_stream for one cuda kernel

I have a simple question. I am allocating n blocks of memory, each associated with a unique cuda_stream, like this (simplified) [it may be a very bad idea -_-]:
void *ptr = NULL;
cudaStream_t stream;
cudaStreamCreate(&stream);              // create the stream before attaching memory to it
cudaMallocManaged(&ptr, size);          // managed (unified) allocation
cudaStreamAttachMemAsync(stream, ptr);  // attach the allocation to this stream
Later in my code I call my kernel with 6 of these memory blocks (chosen by a random process). The CUDA launch syntax takes only one stream argument:
update_gpu<<<256, 256, 0, ???>>>(block1, block2, block3, block4, block5, block6);
??? should be a stream, but which one should I pass? I could synchronize with
cudaDeviceSynchronize()
but it may be too much, as I have a lot of blocks.
cudaStreamSynchronize(...)
looks like a solution; should I do it for five of my streams?
Any suggestions?
best,
++t
Attaching memory to a specific stream is an optimization that tells the runtime this memory does not need to be visible to any stream other than the one specified.
If in doubt, just don't attach the memory to any stream, and it will be visible to all kernels.
This is particularly the right approach if you don't use streams at all (which means all kernels are launched to the default stream).
If however you want to take advantage of this optimization, by the time your kernel runs all managed memory must be either not attached at all, or attached to the specific stream that the kernel is launched to.
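For illustration, a minimal sketch of that advice (the kernel body, buffer size, and use of a single stream are made up): either attach each managed buffer to the stream its kernel will be launched on, or simply omit the attach calls so the memory stays visible to all kernels.

// Sketch only: managed buffers attached to the stream their kernel runs on.
#include <cuda_runtime.h>

__global__ void update_gpu(float *a, float *b) { /* placeholder */ }

int main()
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    float *block1, *block2;
    size_t size = 1 << 20;
    cudaMallocManaged(&block1, size);
    cudaMallocManaged(&block2, size);

    // Optimization only: restrict visibility of these buffers to this stream.
    // If in doubt, skip these two calls and the memory stays visible to all streams.
    cudaStreamAttachMemAsync(stream, block1);
    cudaStreamAttachMemAsync(stream, block2);

    update_gpu<<<256, 256, 0, stream>>>(block1, block2);

    cudaStreamSynchronize(stream);   // waits for just this stream, not the whole device
    return 0;
}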

Running several streams (instead of threads/blocks) in parallel

I have a kernel which I want to start with the configuration "1 block x 32 threads". To increase parallelism I want to start several streams instead of running a bigger "work package" than "1 block x 32 threads". I want to use the GPU in a program where data comes from the network. I don't want to wait until a bigger "work package" is available.
The code is like:
Thread(i=0..14) {
- copy data Host -> GPU [cudaMemcpyAsync(.., stream i)]
- run kernel(stream i)
- copy data GPU -> Host [cudaMemcpyAsync(.., stream i)]
}
The real code is much more complex but I want to keep it simple (15 CPU threads use the GPU).
The code works, but the streams don't run concurrently as expected. The GTX 480 has 15 SMs, where each SM has 32 shader processors. I expect that if I start the kernel 15 times, all 15 streams run in parallel, but this is not the case. I have used the Nvidia Visual Profiler and at most 5 streams run in parallel. Often only one stream runs. The performance is really bad.
I get the best results with a "64 blocks x 1024 threads" configuration. If I instead use a "32 blocks x 1024 threads" configuration with two streams, the streams are executed one after the other and performance drops. I am using CUDA Toolkit 5.5 and Ubuntu 12.04.
Can somebody explain why this is the case and give me some background information? Should it work better on newer GPUs? What is the best way to use the GPU in time-critical applications where you don't want to buffer data? Probably this is not possible, but I am searching for techniques which bring me closer to a solution.
Update:
I did some further research. The problem is the last cudaMemcpyAsync(...) (GPU->Host copy) call. If I remove it, all streams run concurrently. I think the problem is illustrated in http://on-demand.gputechconf.com/gtc-express/2011/presentations/StreamsAndConcurrencyWebinar.pdf on slide 21. They say that on Fermi there are two copy queues, but this is only true for Tesla and Quadro cards, right? I think the problem is that the GTX 480 has only one copy queue and all copy commands (host->GPU AND GPU->host) are put into this one queue. Everything is non-blocking, and the GPU->host memcopy of the first thread blocks the host->GPU memcopy calls of other threads.
Here some observations:
Thread(i=0..14) {
- copy data Host -> GPU [cudaMemcpyAsync(.., stream i)]
- run kernel(stream i)
}
-> works: streams run concurrently
Thread(i=0..14) {
- copy data Host -> GPU [cudaMemcpyAsync(.., stream i)]
- run kernel(stream i)
- sleep(10)
- copy data GPU -> Host [cudaMemcpyAsync(.., stream i)]
}
-> works: streams run concurrently
Thread(i=0..14) {
- copy data Host -> GPU [cudaMemcpyAsync(.., stream i)]
- run kernel(stream i)
- cudaStreamSynchronize(stream i)
- copy data GPU -> Host [cudaMemcpyAsync(.., stream i)]
}
-> doesn't work!!! Maybe cudaStreamSynchronize is put in the copy-queue?
Does someone know a solution for this problem? Something like a blocking kernel call would be nice. The last cudaMemcpyAsync() (GPU->Host) should only be issued once the kernel has finished.
Edit2:
Here is an example to clarify my problem:
To keep it simple we have 2 streams:
Stream1:
------------
HostToGPU1
kernel1
GPUToHost1
Stream2:
------------
HostToGPU2
kernel2
GPUToHost2
The first stream is started. HostToGPU1 is executed, kernel1 is launched and GPUToHost1 is called. GPUToHost1 blocks because kernel1 is running. In the meantime Stream2 is started. HostToGPU2 is called, and CUDA puts it in the queue, but it can't be executed because GPUToHost1 blocks until kernel1 has finished. There are no data transfers at the moment; CUDA just waits for GPUToHost1. So my idea was to call GPUToHost1 only when kernel1 is finished. This seems to be the reason why it works with sleep(..): GPUToHost1 is issued after the kernel has finished. A kernel launch which automatically blocks the CPU thread would be nice.
GPUToHost1 itself is not blocking in the queue (as long as there are no other data transfers at the time; in my case the data transfers are not time-consuming).
Concurrent kernel execution can be most easily witnessed on Linux.
For a good example and an easy test, refer to the concurrent kernels sample.
Good concurrency among kernels generally requires several things:
a device which supports concurrent kernels, so a cc 2.0 or newer device
kernels that are small enough in terms of number of blocks and other resource usage (registers, shared memory) so that multiple kernels can actually execute. Kernels with larger resource requirements will typically be observed to be running serially. This is expected behavior.
proper usage of streams to enable concurrency
In addition, concurrent kernel execution often implies copy/compute overlap. In order for copy/compute overlap to work, you must:
be using a GPU with enough copy engines. Some GPUs have one engine, some have 2. If your GPU has one engine, you can overlap one copy operation (i.e. one direction) with kernel execution. If you have 2 copy engines (your GeForce GPU has 1), you can overlap both directions of copying with kernel execution (you can query this; see the small sketch after this list).
use pinned (host) memory for any host data that will be the source or destination of a copy operation you intend to overlap
use streams properly, along with the async versions of the relevant API calls (e.g. cudaMemcpyAsync)
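As a side note (not part of the original answer): the number of copy engines a device has can be queried through cudaDeviceProp, which may help set expectations for overlap on a given card.

// Sketch only: report the number of copy engines of device 0.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // asyncEngineCount: 0 = no copy/compute overlap, 1 = one copy direction can
    // overlap with kernels, 2 = both directions can overlap with kernels.
    printf("%s: asyncEngineCount = %d\n", prop.name, prop.asyncEngineCount);
    return 0;
}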
Regarding your observation that the smaller 32x1024 kernels do not execute concurrently, this is likely a resource issue (blocks, registers, shared memory) preventing much overlap. If you have enough blocks in the first kernel to occupy the GPU execution resources, it's not sensible to expect additional kernels to begin executing until the first kernel is finished or mostly finished.
EDIT: Responding to question edits and additional comments below.
Yes, GTX480 has only one copy "queue" (I mentioned this explicitly in my answer, but I called it a copy "engine"). You will only be able to get one cudaMemcpy... operation to run at any given time, therefore only one direction (H2D or D2H) can actually be moving data at any given time, and you will only see one cudaMemcpy... operation overlap with any given kernel. And cudaStreamSynchronize causes the host thread to wait until ALL CUDA operations previously issued to that stream have completed.
Note that the cudaStreamSynchronize you have in your last example should not be necessary, I don't think. Streams have 2 execution characteristics:
cuda operations (API calls, kernel calls, everything) issued to the same stream will always execute sequentially, regardless of your use of the Async API or any other considerations.
cuda operations issued to separate streams, assuming all the necessary requirements have been met, will execute asynchronously to each other.
Due to item 1, in your last case, your final "copy data GPU->Host" operation will not begin until the previous kernel call issued to that stream is complete, even without the cudaStreamSynchronize call. So I think you can get rid of that call, i.e. the 2nd case you have listed should be no different than the final case, and in the 2nd case you should not need the sleep operation either. The cudaMemcpy... issued to the same stream will not begin until all previous cuda activity in that stream is finished. This is a characteristic of streams.
EDIT2: I'm not sure we're making any progress here. The issue you pointed out in the GTC presentation here (slide 21) is a valid issue, but you can't work around it by inserting additional synchronization operations, nor would a "blocking kernel" help you with that, nor is it a function of having one copy engine or 2. If you want to issue kernels in separate streams, but issued in sequence with no other intervening cuda operations, then that hazard exists. The solution for this, as pointed out on the next slide, is to not issue the kernels sequentially, which is roughly comparable to your 2nd case. I'll state this again:
you have identified that your case 2 gives good concurrency
the sleep operation in that case is not needed for data integrity
If you want to provide a short sample code that demonstrates the issue, perhaps other discoveries can be made.

Synchronizing two CUDA streams

I'm using CUDA streams to enable asynchronous data transfers and hide memory copy latency. I have 2 CPU threads and 2 CUDA streams: one is a "data" stream, which is essentially a sequence of cudaMemcpyAsync calls initiated by the first CPU thread, and the other is a "compute" stream, which executes compute kernels. The data stream is preparing batches for the compute stream, so it is critical for the compute stream to ensure that the batch it is going to work on is completely loaded into memory.
Should I use CUDA events for such synchronization or some other mechanism?
Update: let me clarify why I cannot use separate streams with data copies/computation in each stream. The problem is that the batches must be processed in order, that is, I cannot execute them in parallel (which, of course, would have been possible with multiple streams). However, while processing each batch, I can pre-load the data for the next batch, thus hiding the data transfers.
To use Robert’s example:
cudaMemcpyAsync( <data for batch1>, dataStream);
cudaMemcpyAsync( <data for batch2>, dataStream);
kernelForBatch1<<<..., opsStream>>>(...);
kernelForBatch2<<<..., opsStream>>>(...);
You can certainly use cuda events to synchronize streams, such as using the cudaStreamWaitEvent API function. However, the idea of putting all data copies in one stream and all kernel calls in another may not be a sensible use of streams.
cuda functions (API calls, kernel calls) issued within a single stream are guaranteed to be executed in order, with any cuda function in that stream not beginning until all previous cuda activity in that stream has completed (even if you are using calls such as cudaMemcpyAsync...).
So streams already give you a mechanism to ensure that a kernel call will not begin until the necessary data has been copied for it. Just put that kernel call in the same stream, after the data copy.
Something like this should take care of your synchronization:
cudaMemcpyAsync( <data for kernel1>, stream1);
cudaMemcpyAsync( <data for kernel2>, stream2);
kernel1<<<..., stream1>>>(...);
kernel2<<<..., stream2>>>(...);
cudaMemcpyAsync( <data from kernel1>, stream1);
cudaMemcpyAsync( <data from kernel2>, stream2);
All of the above calls are asynchronous, so assuming you've met the other requirements for asynchronous execution (such as using pinned memory), all of the above calls should "queue up" and return immediately. However kernel1 is guaranteed not to begin before the preceding cudaMemcpyAsync issued to stream1 has completed, and likewise for kernel2 and the data transfer in stream2.
I don't see any reason to break the above activity into separate CPU threads either. That unnecessarily complicates things. The most trouble-free way to manage a single device is from a single CPU thread.
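If, as in the updated question, the batches must stay ordered in a single compute stream while a separate stream feeds the data, a rough sketch with events could look like this (batch1Ready/batch2Ready are made-up names; dataStream and opsStream are the streams from the question):

cudaEvent_t batch1Ready, batch2Ready;
cudaEventCreate(&batch1Ready);
cudaEventCreate(&batch2Ready);

cudaMemcpyAsync( <data for batch1>, dataStream);
cudaEventRecord(batch1Ready, dataStream);         // marks "batch 1 is now in device memory"
cudaMemcpyAsync( <data for batch2>, dataStream);
cudaEventRecord(batch2Ready, dataStream);

cudaStreamWaitEvent(opsStream, batch1Ready, 0);   // opsStream waits for batch 1's copy
kernelForBatch1<<<..., opsStream>>>(...);
cudaStreamWaitEvent(opsStream, batch2Ready, 0);   // and for batch 2's copy
kernelForBatch2<<<..., opsStream>>>(...);

The kernels still execute in order because they share opsStream, while the copy for batch 2 can proceed in dataStream during kernelForBatch1.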

When to call cudaDeviceSynchronize?

When is calling the cudaDeviceSynchronize function really needed?
As far as I understand from the CUDA documentation, CUDA kernels are asynchronous, so it seems that we should call cudaDeviceSynchronize after each kernel launch. However, I have tried the same code (training neural networks) with and without any cudaDeviceSynchronize, except one before the time measurement. I have found that I get the same result, but with a speed-up of 7-12x (depending on the matrix sizes).
So the question is whether there are any reasons to use cudaDeviceSynchronize apart from time measurement.
For example:
Is it needed before copying data from the GPU back to the host with cudaMemcpy?
If I do matrix multiplications like
C = A * B
D = C * F
should I put cudaDeviceSynchronize between both?
From my experiments it seems that I don't.
Why does cudaDeviceSynchronize slow the program so much?
Although CUDA kernel launches are asynchronous, all GPU-related tasks placed in one stream (which is the default behavior) are executed sequentially.
So, for example,
kernel1<<<X,Y>>>(...); // kernel start execution, CPU continues to next statement
kernel2<<<X,Y>>>(...); // kernel is placed in queue and will start after kernel1 finishes, CPU continues to next statement
cudaMemcpy(...); // CPU blocks until memory is copied, memory copy starts only after kernel2 finishes
So in your example, there is no need for cudaDeviceSynchronize. However, it might be useful for debugging to detect which of your kernels has caused an error (if there is any).
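For example, a minimal sketch of that debugging pattern (kernel names and launch configurations are placeholders): synchronize after each launch so an error can be attributed to the kernel that produced it, at the cost of serializing host and device.

kernel1<<<X,Y>>>(...);
cudaError_t err = cudaDeviceSynchronize();   // also surfaces errors from kernel1's execution
if (err != cudaSuccess)
    printf("kernel1 failed: %s\n", cudaGetErrorString(err));

kernel2<<<X,Y>>>(...);
err = cudaDeviceSynchronize();
if (err != cudaSuccess)
    printf("kernel2 failed: %s\n", cudaGetErrorString(err));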
cudaDeviceSynchronize may cause some slowdown, but 7-12x seems too much. Maybe there is some problem with the time measurement, or maybe the kernels are really fast, and the overhead of explicit synchronization is huge relative to the actual computation time.
One situation where using cudaDeviceSynchronize() is appropriate is when you have several cudaStreams running and you would like them to exchange some information. A real-life case of this is parallel tempering in quantum Monte Carlo simulations. In this case, we want to ensure that every stream has finished running some set of instructions and gotten some results before they start passing messages to each other, or we would end up passing garbage information. The reason using this command slows the program so much is that cudaDeviceSynchronize() forces the program to wait for all previously issued commands in all streams on the device to finish before continuing (from the CUDA C Programming Guide). As you said, kernel execution is normally asynchronous, so while the GPU device is executing your kernel the CPU can continue to work on some other commands, issue more instructions to the device, etc., instead of waiting. However, when you use this synchronization command, the CPU is instead forced to idle until all the GPU work has completed before doing anything else. This behaviour is useful when debugging, since you may have a segfault occurring at seemingly "random" times because of the asynchronous execution of device code (whether in one stream or many). cudaDeviceSynchronize() will force the program to ensure the stream(s)'s kernels/memcpys are complete before continuing, which can make it easier to find out where the illegal accesses are occurring (since the failure will show up during the sync).
When you want your GPU to start processing some data, you typically do a kernel invocation.
When you do so, your device (the GPU) will start doing whatever it is you told it to do. However, unlike in a normal sequential program, your host (the CPU) will keep executing the next lines of code in your program. cudaDeviceSynchronize makes the host (the CPU) wait until the device (the GPU) has finished executing ALL the threads you have started, and thus your program will continue as if it were a normal sequential program.
In small, simple programs you would typically use cudaDeviceSynchronize when you use the GPU for computations, to avoid timing mismatches between the CPU requesting the result and the GPU finishing the computation. Using cudaDeviceSynchronize makes it a lot easier to code your program, but there is one major drawback: your CPU is idle the whole time while the GPU does the computation. Therefore, in high-performance computing you often strive to have the CPU do useful work while it waits for the GPU to finish.
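A minimal sketch of that idea (the kernel and the CPU-side functions are placeholders): launch the kernel, do independent CPU work, and only synchronize when the GPU results are actually needed.

my_kernel<<<grid, block>>>(d_data);   // returns immediately; the GPU works in the background
do_independent_cpu_work();            // anything that does not depend on the kernel's results
cudaDeviceSynchronize();              // now wait for the GPU before touching its results
use_gpu_results();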
You might also need to call cudaDeviceSynchronize() after launching kernels from kernels (Dynamic Parallelism).
From this post CUDA Dynamic Parallelism API and Principles:
If the parent kernel needs results computed by the child kernel to do its own work, it must ensure that the child grid has finished execution before continuing by explicitly synchronizing using cudaDeviceSynchronize(void). This function waits for completion of all grids previously launched by the thread block from which it has been called. Because of nesting, it also ensures that any descendants of grids launched by the thread block have completed.
...
Note that the view of global memory is not consistent when the kernel launch construct is executed. That means that in the following code example, it is not defined whether the child kernel reads and prints the value 1 or 2. To avoid race conditions, memory which can be read by the child should not be written by the parent after kernel launch but before explicit synchronization.
__device__ int v = 0;

__global__ void child_k(void) {
    printf("v = %d\n", v);
}

__global__ void parent_k(void) {
    v = 1;
    child_k<<<1, 1>>>();
    v = 2; // RACE CONDITION
    cudaDeviceSynchronize();
}

CUDA host to device transfer faster than device to host transfer

I was working on a simple CUDA program in which I figured out that 90% of the time was coming from a single statement, a cudaMemcpy from device to host. The program was transferring some 2 MB of data from host to device in 600-700 microseconds and was copying back 4 MB of data from device to host in 10 ms. The total time taken by my program was 13 ms. My question is: why is there an asymmetry between the two copies, device to host and host to device? Is it because CUDA developers assumed that copying back would usually involve fewer bytes? My second question is: is there any way to circumvent it?
I am using a Fermi GTX560 graphics card with 343 cores and 1GB memory.
Timing CUDA functions is a bit different from timing CPU code. First of all, make sure you do not include the CUDA initialization cost in your measurement by calling a CUDA function at the start of your application; otherwise initialization may happen inside your timed region.
int main(int argc, char **argv) {
    cudaFree(0);   // forces CUDA context initialization up front
    .... // CUDA is initialized; start your timing after this point
}
Use a Cutil timer like this
unsigned int timer;
cutCreateTimer(&timer);
cutStartTimer(timer);
//your code, to assess elapsed time..
cutStopTimer(timer);
printf("Elapsed: %.3f\n", cutGetTimerValue(timer));
cutDeleteTimer(timer);
Now, after these preliminary steps, let's look at the problem. When a kernel is called, the CPU is stalled only until the call has been delivered to the GPU; the GPU then continues executing while the CPU continues too. If you call cudaThreadSynchronize(...), the CPU stalls until the GPU finishes the current call. A cudaMemcpy operation also requires the GPU to finish its execution, because the values that the kernel should fill in are being requested.
kernel<<<numBlocks, threadPerBlock>>>(...);
cudaError_t err = cudaThreadSynchronize();
if (cudaSuccess != err) {
fprintf(stderr, "cudaCheckError() failed at %s:%i : %s.\n", __FILE__, __LINE__, cudaGetErrorString( err ) );
exit(1);
}
//now the kernel is complete..
cutStopTimer(timer);
So place a synchronization call before calling the stop-timer function. If you place a memory copy right after the kernel call, the measured time of the memory copy will include part of the kernel execution. So the memcpy operation should be placed after the timing operations.
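As an alternative to the Cutil timers (which come from the old SDK samples), a self-contained sketch using CUDA events to time the two transfer directions separately might look like this (buffer size and names are made up; cudaEventSynchronize handles the synchronization issue described above):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t bytes = 4 << 20;          // 4 MB, roughly the size from the question
    float *h_buf, *d_buf;
    cudaMallocHost(&h_buf, bytes);         // pinned host buffer
    cudaMalloc(&d_buf, bytes);

    cudaEvent_t start, stop;
    float ms = 0.0f;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);   // H2D
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);            // wait for the copy before reading the timer
    cudaEventElapsedTime(&ms, start, stop);
    printf("H2D: %.3f ms\n", ms);

    cudaEventRecord(start);
    cudaMemcpy(h_buf, d_buf, bytes, cudaMemcpyDeviceToHost);   // D2H
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("D2H: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}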
There are also some profiler counters that may be used to assess some sections of the kernels.
How to profile the number of global memory transactions for cuda kernels?
How Do You Profile & Optimize CUDA Kernels?