Profiling my solution, I see dependencies between memory transfer and kernel computation. For a 60 MB data transfer, I see about 2 ms of overhead for each overlapped kernel computation.
I'm comparing my basic solution with the enhanced (overlapped) one to see the differences. Both process the same amount of data with the same kernels (whose work does not depend on the data values).
So am I wrong or missing something somewhere, or does the overlap really consume a "significant" part of the GPU?
I understand that the overlap mechanism has to order the data transfers, control when they are issued, and perhaps do some context switching, but 2 ms still seems too much for that overhead alone?
When you overlap data copy with compute, both operations are competing for GPU memory bandwidth. If your kernel is memory-bandwidth bound, then it's possible that overlapping the operations will cause both the compute and the memory copy to run longer than if either were running alone.
60 megabytes of data on a PCIe Gen2 link will take ~10 ms if there is no contention. An extra 2 ms when there is contention doesn't sound out of range to me, but it will depend to a significant degree on which GPU you are using. It's also not clear whether the "overhead" you're referring to is an extension of the length of the transfer, the kernel compute, or the overall program. Different GPUs have different GPU memory bandwidth numbers.
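For reference, the usual overlap pattern looks roughly like the sketch below (a minimal sketch, with illustrative buffer, kernel and stream names). Note that the host buffer must be pinned for the asynchronous copies to actually overlap with compute, and even then both operations still share PCIe and GPU memory bandwidth.

#include <cuda_runtime.h>

// Illustrative kernel standing in for the real computation.
__global__ void myKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    const int N = 1 << 24;          // ~16M floats, roughly 64 MB
    const int half = N / 2;

    float *h_buf;
    cudaMallocHost(&h_buf, N * sizeof(float));   // pinned host memory, required for overlap

    float *d_buf[2];
    cudaMalloc(&d_buf[0], half * sizeof(float));
    cudaMalloc(&d_buf[1], half * sizeof(float));

    cudaStream_t s[2];
    cudaStreamCreate(&s[0]);
    cudaStreamCreate(&s[1]);

    // Each stream copies its half and then runs the kernel on it; the copy in
    // one stream can overlap with the kernel running in the other stream.
    for (int i = 0; i < 2; ++i) {
        cudaMemcpyAsync(d_buf[i], h_buf + i * half, half * sizeof(float),
                        cudaMemcpyHostToDevice, s[i]);
        myKernel<<<(half + 255) / 256, 256, 0, s[i]>>>(d_buf[i], half);
    }
    cudaDeviceSynchronize();
    return 0;
}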
I have recently started playing with the NVIDIA Visual Profiler (CUDA 7.5) to time my applications.
However, I don't seem to fully understand the implications of the outputs I get, and I'm not sure how to act on the different profiler results.
As an example: a CUDA code that calls a single kernel ~360 times in a for loop. On each call, the kernel performs 512^2 units of work, each doing about 1000 3D texture memory reads. One thread is allocated per one of the 512^2 units. Some arithmetic is needed to know which position to read in texture memory. The texture reads are performed without interpolation, always at an exact data index. The reason 3D texture memory was chosen is that the reads are relatively random, so coalesced access is not expected. I can't find the reference for this, but I definitely read it somewhere on SO.
The description is short, but I hope it gives a small overview of the operations the kernel performs (posting the whole kernel would probably be too much, but I can if required).
From now on, I will describe my interpretation of the profiler.
When profiling, if I run Examine GPU usage I get the following:
From here I see several things:
Low Memcopy/Compute overlap 0%. This is expected, as I run a big kernel, wait until it has finished and then memcopy. There should not be overlap.
Low Kernel Concurrency 0%. I just got 1 kernel, this is expected.
Low Memcopy Overlap 0%. Same thing. I only memcopy once at the beginning, and once after each kernel. This is expected.
From the kernel execution "bars", at the top and on the right, I can see:
Most of the time is running kernels. There is little memory overhead.
All kernels take the same time (good)
The biggest flag is occupancy, always below 45%, with registers as the limiter. However, optimizing occupancy doesn't always seem to be a priority.
I follow my profiling by running Perform Kernel Analysis, getting:
I can see here that
Compute and memory utilization are low in the kernel. The profiler suggests that anything below 60% is not good.
Most of the time is in computing and L2 cache reading.
Something else?
I continue by running Perform Latency Analysis, as the profiler suggests that the biggest bottleneck is there.
The 3 biggest stall reasons seem to be:
Memory dependency. Too many texture memreads? But I need this amount of memreads.
Execution dependency. "Can be reduced by increasing instruction level parallelism". Does this mean that I should try to change e.g. a=a+1;a=a*a;b=b+1;b=b*b; to a=a+1;b=b+1;a=a*a;b=b*b;? (See the small sketch after this list.)
Instruction fetch (????)
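To make the execution-dependency point concrete, this is the kind of reordering I mean (a toy sketch only, not my actual kernel):

// Version 1: each statement depends on the previous one, forming a long
// dependency chain on a, then on b.
__device__ float serial_chain(float a, float b) {
    a = a + 1.0f;
    a = a * a;
    b = b + 1.0f;
    b = b * b;
    return a + b;
}

// Version 2: the a-chain and b-chain are interleaved, so independent
// instructions can be issued back to back (more instruction-level parallelism).
__device__ float interleaved(float a, float b) {
    a = a + 1.0f;
    b = b + 1.0f;
    a = a * a;
    b = b * b;
    return a + b;
}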
Questions:
Are there additional tests I can perform to better understand my kernel's execution time limitations?
Is there a way to profile at the instruction level inside the kernel?
Are there more conclusions one can draw from the profiling results than the ones I have drawn?
If I were to start trying to optimize the kernel, where would I start?
Are there additional tests I can perform to better understand my kernel's execution time limitations?
Of course! If you pay attention to the "Properties" window, your screenshot is telling you that your kernel 1) is limited by register usage (check this in the 'Kernel Latency' analysis), and 2) has low warp efficiency (less than 100% means thread divergence; check this in 'Divergent Execution').
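If register pressure does turn out to be the limiter, one option worth trying (a minimal sketch; the kernel name and bounds are placeholders) is to cap registers per thread with __launch_bounds__ or the -maxrregcount compiler flag, and then re-profile, since spilling to local memory can make things worse:

// Ask the compiler to support 256 threads per block and at least 4 resident
// blocks per multiprocessor, which effectively limits registers per thread.
__global__ void __launch_bounds__(256, 4)
myKernel(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}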
Is there a way to profile at the instruction level inside the kernel?
Yes, there are two types of profiling available:
'Kernel Profile - Instruction Execution'
'Kernel Profile - PC Sampling' (only on Maxwell)
Are there more conclusions one can draw from the profiling results than the ones I have drawn?
You should check if your kernel has some thread divergence. Also you should check that there is no problem with shared/global memory access patterns.
If I were to start trying to optimize the kernel, where would I start?
I find the Kernel Latency window the most useful one, but I suppose it depends on the type of kernel you are analyzing.
Consumer-grade Nvidia GPUs are expected to have about 1-10 soft memory errors per week.
If you somehow manage to detect an error on a system without ECC
(e.g. because the results are abnormal), what steps are necessary and sufficient to recover from it?
Is it enough to just reload all of the data to the GPU (cuda.memcpy_htod in PyCuda),
or do you need to reboot the system? And what about the kernel code, rather than the data?
A soft memory error (meaning incorrect results due to noise of some kind) shouldn't require a reboot. Just rewind back to some known good position, reload the data to the GPU and proceed.
Of course, it depends on what was located in the memory that was corrupted. I have accidentally overwritten memory on GPUs that required a reboot to fix, so it seems that could happen if memory is randomly corrupted as well. I think the GPU drivers reside partially in GPU memory.
For critical calculations, one can guard against soft memory errors by running the same calculation twice (including memory copies, etc) and comparing the result.
Since compute cards with ECC are often more than twice as expensive as graphics cards, it may be cheaper to just purchase two graphics cards, run the same calculations on both and compare all results. That has the added benefit of doubling the calculation speed for non-critical calculations.
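A minimal sketch of the run-twice-and-compare idea on a single card (the kernel and sizes are illustrative); with two cards you would select a different device for each pass via cudaSetDevice:

#include <cuda_runtime.h>
#include <cstdio>
#include <cstring>
#include <vector>

// Stand-in for the real calculation.
__global__ void compute(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;
}

int main() {
    const int n = 1 << 20;
    std::vector<float> h_in(n, 1.0f), h_out1(n), h_out2(n);

    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));

    // Run the same calculation twice, re-uploading the inputs each time,
    // then compare the two results on the host.
    for (int pass = 0; pass < 2; ++pass) {
        cudaMemcpy(d_in, h_in.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        compute<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
        cudaMemcpy(pass == 0 ? h_out1.data() : h_out2.data(), d_out,
                   n * sizeof(float), cudaMemcpyDeviceToHost);
    }

    if (std::memcmp(h_out1.data(), h_out2.data(), n * sizeof(float)) != 0)
        std::printf("mismatch detected: rewind to the last known good state and redo\n");
    return 0;
}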
I've tested empirically with several values for the number of blocks and threads, and the execution time can be greatly reduced with specific values.
I don't see what the differences between blocks and threads are. I figure it may be that threads in a block have specific cache memory, but it's quite fuzzy to me. For the moment, I parallelize my functions into N parts, which are allocated across blocks/threads.
My goal would be to automatically adjust the number of blocks and threads according to the size of the memory I have to use. Could that be possible? Thank you.
Hong Zhou's answer is good, so far. Here are some more details:
If you use shared memory, you might want to consider it first, because it's a very limited resource, and it's not unlikely that a kernel has very specific needs that constrain the many variables controlling parallelism. You either have blocks with many threads sharing larger regions, or blocks with fewer threads sharing smaller regions (at constant occupancy).
If your code can live with as little as 16 KB of shared memory per multiprocessor, you might want to opt for the larger (48 KB) L1 cache by calling
cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);
Further, L1-caches can be disabled for non-local global access using the compiler option -Xptxas=-dlcm=cg to avoid pollution when the kernel accesses global memory carefully.
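For concreteness, a minimal sketch of the two cache-configuration calls (the kernel name is just a placeholder):

#include <cuda_runtime.h>

// Placeholder kernel that genuinely needs shared memory.
__global__ void sharedHeavyKernel(float *data) { /* ... */ }

int main() {
    // Prefer a larger L1 cache (48 KB L1 / 16 KB shared) device-wide.
    cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);

    // Override the preference for the one kernel that does need shared memory.
    cudaFuncSetCacheConfig(sharedHeavyKernel, cudaFuncCachePreferShared);

    // To additionally bypass L1 for global loads, compile with
    //   nvcc -Xptxas -dlcm=cg ...
    return 0;
}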
Before worrying about optimal performance based on occupancy you might also want to check
that device debugging support is turned off for CUDA >= 4.1 (or appropriate optimization options are given, read my post in this thread for a suitable compiler
configuration).
Now that we have a memory configuration and registers are actually used aggressively,
we can analyze the performance under varying occupancy:
The higher the occupancy (warps per multiprocessor) the less likely the multiprocessor will have to wait (for memory transactions or data dependencies) but the more threads must share the same L1 caches, shared memory area and register file (see CUDA Optimization Guide and also this presentation).
The ABI can generate code for a variable number of registers (more details can be found in the thread I cited). At some point, however, register spilling occurs. That is, register values get temporarily stored on the (relatively slow, off-chip) local memory stack.
Watching stall reasons, memory statistics and arithmetic throughput in the profiler while
varying the launch bounds and parameters will help you find a suitable configuration.
It's theoretically possible to find optimal values from within an application. However, having the client code adjust optimally to different devices and launch parameters can be nontrivial, and it will require recompilation, or deploying different variants of the kernel for every target device architecture.
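That said, the occupancy calculator API available in newer CUDA toolkits (cudaOccupancyMaxPotentialBlockSize) can at least suggest a block size at run time for a given kernel; a minimal sketch with a placeholder kernel follows. It only maximizes theoretical occupancy, so the result still has to be validated by profiling.

#include <cuda_runtime.h>
#include <cstdio>

// Placeholder kernel standing in for the real work.
__global__ void myKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    int minGridSize = 0, blockSize = 0;

    // Ask the runtime for a block size that maximizes theoretical occupancy
    // for this particular kernel (it accounts for register/shared memory usage).
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, myKernel, 0, 0);

    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));

    int gridSize = (n + blockSize - 1) / blockSize;   // enough blocks to cover n elements
    myKernel<<<gridSize, blockSize>>>(d_data, n);
    cudaDeviceSynchronize();

    std::printf("suggested block size: %d (min grid size for full occupancy: %d)\n",
                blockSize, minGridSize);
    return 0;
}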
I believe automatically adjusting the block and thread sizes is a highly difficult problem. If it were easy, CUDA would most probably have this feature for you.
The reason is that the optimal configuration depends on the implementation and on the kind of algorithm you are implementing. It requires profiling and experimentation to get the best performance.
Here are some limiting factors you can consider:
Register usage in your kernel.
Occupancy of your current implementation.
Note: having more threads does not equate to best performance. Best performance is obtained by getting the right occupancy in your application and keeping the GPU cores busy all the time.
There's quite a good answer here; in short, computing the optimal distribution across blocks and threads is a difficult problem.
To what degree can one predict / calculate the performance of a CUDA kernel?
Having worked a bit with CUDA, this seems nontrivial.
But a colleague of mine, who does not work with CUDA, told me that it can't be hard if you know the memory bandwidth, the number of processors and their speed.
What he said does not seem consistent with what I have read. Below is what I imagine could work. What do you think?
Memory processed
------------------ = runtime for memory bound kernels ?
Memory bandwidth
or
Flops
------------ = runtime for computation bound kernels?
Max GFlops
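For concreteness, this is the kind of back-of-the-envelope estimate I have in mind (a rough sketch: the workload numbers and the peak GFLOPS figure are made up, and the peak bandwidth is derived from cudaGetDeviceProperties assuming DDR memory):

#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // Theoretical peak memory bandwidth in GB/s:
    // memoryClockRate is in kHz, memoryBusWidth in bits, factor 2 for DDR.
    double peakBandwidthGBs =
        2.0 * prop.memoryClockRate * 1e3 * (prop.memoryBusWidth / 8.0) / 1e9;

    // Made-up workload numbers, just to illustrate the two formulas above.
    double bytesProcessed = 512.0 * 512.0 * 1000.0 * 4.0;    // bytes read per launch
    double flops          = 512.0 * 512.0 * 5000.0;          // arithmetic operations per launch
    double peakGflops     = 1000.0;                          // assumed device peak

    double memoryBoundMs  = bytesProcessed / (peakBandwidthGBs * 1e9) * 1e3;
    double computeBoundMs = flops / (peakGflops * 1e9) * 1e3;

    std::printf("peak bandwidth:          %.1f GB/s\n", peakBandwidthGBs);
    std::printf("memory-bound estimate:   %.3f ms\n", memoryBoundMs);
    std::printf("compute-bound estimate:  %.3f ms\n", computeBoundMs);
    return 0;
}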
Such a calculation will barely give a good prediction. There are many factors that hurt performance, and those factors interact with each other in an extremely complicated way. So your calculation will give an upper bound on the performance, which is far from the actual performance (in most cases).
For example, among memory-bound kernels, those with many cache misses will behave differently from those with mostly hits. The same goes for kernels with divergence, or with barriers...
I suggest you to read this paper, which might give you more ideas on the problem: "An Analytical Model for a GPU Architecture with Memory-level and Thread-level Parallelism Awareness".
Hope it helps.
I think you can predict a best-case with a bit of work. Like you said, with instruction counts, memory bandwidth, input size, etc.
However, predicting the actual or worst-case is much trickier.
First off, there are factors like memory access patterns. E.g.: with older CUDA-capable cards, you had to take care to distribute your global memory accesses so that they wouldn't all contend for a single memory bank. (Newer CUDA cards use a hash between logical and physical addresses to resolve this.)
Secondly, there are non-deterministic factors like: how busy is the PCI bus? How busy is the host kernel? Etc.
I suspect the easiest way to get close to actual run-times is basically to run the kernel on subsets of the input and see how long it actually takes.
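For example, a minimal timing harness with CUDA events, run on increasing subsets of the input (the kernel name and sizes are illustrative):

#include <cuda_runtime.h>
#include <cstdio>

// Illustrative kernel; substitute the real one.
__global__ void myKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 0.5f + 1.0f;
}

int main() {
    const int fullSize = 1 << 22;
    float *d_data;
    cudaMalloc(&d_data, fullSize * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time the kernel on growing subsets and extrapolate to the full input.
    for (int n = fullSize / 8; n <= fullSize; n *= 2) {
        cudaEventRecord(start);
        myKernel<<<(n + 255) / 256, 256>>>(d_data, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        std::printf("n = %9d: %.3f ms\n", n, ms);
    }
    return 0;
}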
It seems apparent that each core of the GPU could handle a request, rather than one main processor (the system's CPU) handling all requests. On the surface it seems possible, perhaps with templates on the GPU plus a Redis database in GPU GDDR5?
Is it possible and worthwhile?
How would the GPU access disks, databases, etc.?
Requests are usually short, sharp processing snippets. You'd have to load each request out of main memory into GPU memory, do a computation, and fire it back again. There's an overhead when transferring data from main memory to GPU memory. Therefore, it's only worth doing a GPU computation if the calculation is long enough and the problem is amenable to parallel processing on a GPU.
In essence, GPUs are good at stream processing. Not for lots of small requests.
The previous answer is valid. There is also another caveat regarding GPUs: the instruction set is smaller and data is processed in matrices, i.e. the same operation is applied to every element of a set. So you will need to be very clever in designing what these replicated operations are.
I take it you are considering a GPU HTTP server.