GPUs are very fast at parallel computation and can outperform CPUs by a factor of 15-30 (some have reported even 50). However,
GPU memory is very limited compared to CPU memory, and communication between GPU memory and the CPU is not as fast.
Let's say we have some data that won't fit into GPU RAM but we still want to use
the GPU's power to compute on it. What we can do is split that data into pieces and feed them to the GPU one by one.
Sending large data to the GPU can take time, so one might think: what if we split a data piece into two, feed the first half, run the kernel, and then feed the other half while the kernel is running?
By that logic we should save some time, because the data transfer would be going on while the computation is, hopefully without interrupting the kernel's work; when it finishes, it can simply continue without having to wait for new data.
I must say that I'm new to GPGPU and new to CUDA, but I have been experimenting with simple CUDA code and have noticed that the function cudaMemcpy, used to transfer data between CPU and GPU, will block if a kernel is running. It will wait until the kernel is finished and only then do its job.
My question: is it possible to accomplish something like what is described above, and if so, could someone show an example or point me to a source of information on how it could be done?
Thank you!
is it possible to accomplish something like that described above
Yes, it's possible. What you're describing is a pipelined algorithm, and CUDA has various asynchronous capabilities to enable it.
The asynchronous concurrent execution section of the programming guide covers the necessary elements in CUDA to make it work. To use your example, there exists a non-blocking version of cudaMemcpy, called cudaMemcpyAsync. You'll need to understand CUDA streams and how to use them.
I would also suggest this presentation which covers most of what is needed.
Finally, here is a worked example. That particular example happens to use CUDA stream callbacks, but those are not necessary for basic pipelining. They enable additional host-oriented processing to be asynchronously triggered at various points in the pipeline, but the basic chunking of data, and delivery of data while processing is occurring does not depend on stream callbacks. Note also the linked CUDA sample codes in that answer, which may be useful for study/learning.
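As a minimal sketch of such a pipeline (assuming pinned host memory, equal-sized chunks, and a made-up process_chunk kernel standing in for the real work), two streams are enough to overlap transfers with computation:

```cuda
// Minimal pipelining sketch: split the data into chunks and overlap
// host-to-device copies, kernel execution and device-to-host copies using
// two CUDA streams. process_chunk is a placeholder for the real kernel,
// and h_data must be pinned (cudaHostAlloc) for the copies to be truly async.
#include <cuda_runtime.h>

__global__ void process_chunk(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;                       // stand-in for the real computation
}

void pipeline(float *h_data, int total, int chunk) {
    cudaStream_t streams[2];
    float *d_buf[2];
    for (int s = 0; s < 2; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc((void **)&d_buf[s], chunk * sizeof(float));
    }
    for (int off = 0, s = 0; off < total; off += chunk, s ^= 1) {
        int n = (total - off < chunk) ? (total - off) : chunk;
        // Within one stream these three operations run in order; work queued
        // in the other stream can overlap with them.
        cudaMemcpyAsync(d_buf[s], h_data + off, n * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        process_chunk<<<(n + 255) / 256, 256, 0, streams[s]>>>(d_buf[s], n);
        cudaMemcpyAsync(h_data + off, d_buf[s], n * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();
    for (int s = 0; s < 2; ++s) {
        cudaStreamDestroy(streams[s]);
        cudaFree(d_buf[s]);
    }
}
```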
Related
I'm trying to improve the performance of my code using asynchronous memory transfers overlapped with GPU computation.
Formerly I had code where I created an FFT plan and then used it multiple times. In that situation the time invested in creating the CUDA FFT plan is negligible, although according to this earlier post it could be quite significant.
Now that I have moved to streams, what I'm doing is creating the "same" plan "multiple times" and then setting the CUDA FFT stream. According to the answers given by some of you in this other post, this is wasteful. But is there any other way to do it?
NOTE: I'm acquiring the data in real time, so launching a "batch" CUDA FFT is out of the question. What I'm doing is creating and launching a new CUDA stream as a result of each complete pulse transmission.
NOTE 2: I was also considering using a "pool" of "CUDA streams/FFT plans" instead, but I don't think that would be an elegant, sensible solution. Any thoughts?
Is there otherwise a way to "copy" an "existing" FFT plan before I assign the CUDA stream?
Thanks guys!/gals? Hopefully meet some of you in San Jose. =)
Omar
What I'm doing is creating and launching a new CUDA stream as a result of each complete pulse transmission.
Re-use the streams, rather than creating a new stream each time. Then you can re-use the plan created for that stream ahead of time, and you have no need to recreate the "same" plan on-the-fly.
Perhaps this is what you mean by the pool of streams method. Your criticism is that it is not "elegant" or "sensible". I have no idea what that means. Stream re-use in pipelined algorithms is a common tactic, if for no other reason than to avoid the cudaStreamCreate overhead (whatever it may be, large or small).
A cufft plan has a stream associated with it. You cannot copy a plan without the stream association. A plan is an opaque container.
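To illustrate the stream re-use idea (a sketch only; POOL_SIZE, NX, and the 1D C2C plan are made-up placeholders for whatever your pulses actually need), the streams and plans can be created once up front and then cycled through round-robin:

```cuda
// Sketch of a pool of streams, each with its own pre-created cuFFT plan.
// POOL_SIZE and NX are illustrative placeholders.
#include <cufft.h>
#include <cuda_runtime.h>

#define POOL_SIZE 4
#define NX 4096

cudaStream_t streams[POOL_SIZE];
cufftHandle  plans[POOL_SIZE];

void init_pool() {
    for (int i = 0; i < POOL_SIZE; ++i) {
        cudaStreamCreate(&streams[i]);
        cufftPlan1d(&plans[i], NX, CUFFT_C2C, 1);   // one plan per stream
        cufftSetStream(plans[i], streams[i]);       // bind the plan to its stream
    }
}

// On each incoming pulse, grab the next stream/plan pair round-robin
// instead of creating a new stream and plan every time.
void handle_pulse(int pulse_id, cufftComplex *d_in, cufftComplex *d_out) {
    int i = pulse_id % POOL_SIZE;
    cufftExecC2C(plans[i], d_in, d_out, CUFFT_FORWARD);
}
```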
I want to copy a bunch of data from host to device, either all at once or as a series of chunks in turn; the choice will affect my algorithm. My question is: which approach has more overhead? I suspect that invoking multiple copy operations has more overhead than a single one, but I would like to understand this in more detail (thinking in general terms doesn't always square with reality :)). If you could point me to a document of some sort, it would be really appreciated!
Kind Regards,
Ilker
If the copies are synchronous (the default), they do incur more overhead because each copy function waits until the GPU is idle before returning. If the copies are asynchronous, the overhead is a few microseconds, which will only be visible for small memcpy's.
Note that you can specify an asynchronous memcpy with the NULL stream, and you will still get the benefits of CPU/GPU concurrency. (i.e. the CPU can kick off the next memcpy while the GPU is processing the previous ones.)
The CUDA Handbook (which, in the interests of full disclosure, I should say that I wrote) covers this issue in Chapter 6, and the source code includes an app that measures the size of a memcpy that hides the overhead of invoking a memcpy. Check out breakevenMemcpyHtoD.cu in the repository:
https://github.com/ArchaeaSoftware/cudahandbook/tree/master/concurrency
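As a rough, self-contained illustration (not the handbook's app; the 64 MB size and 1024 chunks below are arbitrary), you can time one large asynchronous copy against the same data sent as many small chunks and see the per-call overhead directly:

```cuda
// Rough timing sketch: one large cudaMemcpyAsync vs. many chunked copies.
// Buffer size and chunk count are arbitrary; host memory is pinned so the
// copies are truly asynchronous.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t total  = 64 << 20;           // 64 MB
    const int    chunks = 1024;
    const size_t chunk  = total / chunks;

    char *h, *d;
    cudaHostAlloc((void **)&h, total, cudaHostAllocDefault);
    cudaMalloc((void **)&d, total);

    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    float msOne, msMany;

    cudaEventRecord(start);
    cudaMemcpyAsync(d, h, total, cudaMemcpyHostToDevice);
    cudaEventRecord(stop); cudaEventSynchronize(stop);
    cudaEventElapsedTime(&msOne, start, stop);

    cudaEventRecord(start);
    for (int i = 0; i < chunks; ++i)
        cudaMemcpyAsync(d + i * chunk, h + i * chunk, chunk, cudaMemcpyHostToDevice);
    cudaEventRecord(stop); cudaEventSynchronize(stop);
    cudaEventElapsedTime(&msMany, start, stop);

    printf("1 copy: %.3f ms, %d copies: %.3f ms\n", msOne, chunks, msMany);
    return 0;
}
```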
TL;DR version: "What's the best way to round-robin kernel calls to multiple GPUs with Python/PyCUDA such that CPU and GPU work can happen in parallel?" with a side of "I can't have been the first person to ask this; anything I should read up on?"
Full version:
I would like to know the best way to design context, etc. handling in an application that uses CUDA on a system with multiple GPUs. I've been trying to find literature that talks about guidelines for when context reuse vs. recreation is appropriate, but so far haven't found anything that outlines best practices, rules of thumb, etc.
The general overview of what we're needing to do is:
Requests come in to a central process.
That process forks to handle a single request.
Data is loaded from the DB (relatively expensive).
Then the following is repeated an arbitrary number of times based on the request (dozens):
A few quick kernel calls to compute data that is needed for later kernels.
One slow kernel call (10 sec).
Finally:
Results from the kernel calls are collected and processed on the CPU, then stored.
At the moment, each kernel call creates and then destroys a context, which seems wasteful. Setup is taking about 0.1 sec per context and kernel load, and while that's not huge, it is precluding us from moving other quicker tasks to the GPU.
I am trying to figure out the best way to manage contexts, etc., so that we can use the machine efficiently. I think that in the single-GPU case, it's relatively simple:
Create a context before starting any of the GPU work.
Launch the kernels for the first set of data.
Record an event for after the final kernel call in the series.
Prepare the second set of data on the CPU while the first is computing on the GPU.
Launch the second set, repeat.
Ensure that each event gets synchronized before collecting the results and storing them.
That seems like it should do the trick, assuming proper use of overlapped memory copies.
However, I'm unsure what I should do when wanting to round-robin each of the dozens of items to process over multiple GPUs.
The host program is Python 2.7, using PyCUDA to access the GPU. Currently it's not multi-threaded, and while I'd rather keep it that way ("now you have two problems" etc.), if the answer means threads, it means threads. Similarly, it would be nice to just be able to call event.synchronize() in the main thread when it's time to block on data, but for our needs efficient use of the hardware is more important. Since we'll potentially be servicing multiple requests at a time, letting other processes use the GPU when this process isn't using it is important.
I don't think that we have any explicit reason to use Exclusive compute modes (ie. we're not filling up the memory of the card with one work item), so I don't think that solutions that involve long-standing contexts are off the table.
Note that answers in the form of links to other content that covers my questions are completely acceptable (encouraged, even), provided they go into enough detail about the why, not just the API. Thanks for reading!
Caveat: I'm not a PyCUDA user (yet).
With CUDA 4.0+ you don't even need an explicit context per GPU. You can just call cudaSetDevice (or the PyCUDA equivalent) before doing per-device stuff (cudaMalloc, cudaMemcpy, launch kernels, etc.).
If you need to synchronize between GPUs, you may need to create streams and/or events and use cudaEventSynchronize (or the PyCUDA equivalent). You can even have one stream wait on an event inserted into another stream to express sophisticated dependencies.
So I suspect the answer today is quite a lot simpler than talonmies' excellent pre-CUDA-4.0 answer.
You might also find this answer useful.
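As a loose illustration of the CUDA 4.0+ approach (a CUDA C sketch rather than PyCUDA, with a made-up work_kernel; the PyCUDA calls would be analogous), round-robining items over the available GPUs could look roughly like this:

```cuda
// Sketch: round-robin work items across all visible GPUs, one stream per
// device. work_kernel is a stand-in for the real kernels. With CUDA 4.0+
// the runtime manages a context per device implicitly; cudaSetDevice is
// enough to select where the following calls go.
#include <cuda_runtime.h>

#define MAX_DEV 8   // arbitrary upper bound for this sketch

__global__ void work_kernel(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] += 1.0f;                       // placeholder work
}

int main() {
    const int n_items = 24, n = 1 << 20;
    int n_dev = 0;
    cudaGetDeviceCount(&n_dev);
    if (n_dev > MAX_DEV) n_dev = MAX_DEV;

    cudaStream_t streams[MAX_DEV];
    float *d_buf[MAX_DEV];
    for (int d = 0; d < n_dev; ++d) {
        cudaSetDevice(d);                          // select device d for the calls below
        cudaStreamCreate(&streams[d]);
        cudaMalloc((void **)&d_buf[d], n * sizeof(float));
    }

    for (int item = 0; item < n_items; ++item) {
        int d = item % n_dev;                      // round-robin device selection
        cudaSetDevice(d);
        work_kernel<<<(n + 255) / 256, 256, 0, streams[d]>>>(d_buf[d], n);
        // The CPU returns immediately and can prepare the next item here.
    }

    for (int d = 0; d < n_dev; ++d) {
        cudaSetDevice(d);
        cudaStreamSynchronize(streams[d]);         // block only when collecting results
    }
    return 0;
}
```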
(Re)Edit by OP: Per my understanding, PyCUDA supports versions of CUDA prior to 4.0, and so still uses the old API/semantics (the driver API?), so talonmies' answer is still relevant.
I'm working on data prefetching in NVIDIA CUDA. I have read some documents on prefetching on the device itself, i.e., prefetching from shared memory to cache.
But I'm interested in data prefetching between CPU and GPU. Can anyone point me to some documents or other material on this matter? Any help would be appreciated.
Answer based on your comment:
When we want to perform computation on large data, ideally we'll send the maximum amount of data to the GPU, perform the computation, and send it back to the CPU, i.e. SEND, COMPUTE, SEND (back to CPU). While it sends the results back to the CPU, the GPU has to stall. My plan is: given a CUDA program that would normally run in the entire global memory, I'll compel it to run in half of the global memory so that I can use the other half for data prefetching. While computation is being performed in one half, I can simultaneously prefetch data into the other half, so there will be no stalls. Now tell me, is that feasible? Will performance be degraded or improved? It should improve.
CUDA streams were introduced to enable exactly this approach.
If your computation is rather intensive, then yes, it can greatly speed up your performance. On the other hand, if data transfers take, say, 90% of your time, you will only save on the computation time, that is, 10% at most...
The details, including examples, of how to use streams are provided in the CUDA Programming Guide.
For version 4.0, that will be section "3.2.5.5 Streams", and in particular "3.2.5.5.5 Overlapping Behavior" --- there, they launch another, asynchronous memory copy, while a kernel is still running.
Perhaps you would be interested in the asynchronous host/device memory transfer capabilities of CUDA 4.0? You can overlap host/device memory transfers and kernels by using page-locked host memory. You could use this to...
Copy working set #1 & #2 from host to device.
Process #i, promote #i+1, and load #i+2 - concurrently.
So you could be streaming data in and out of the GPU and computing on it all at once (!). Please refer to the CUDA 4.0 Programming Guide and CUDA 4.0 Best Practices Guide for more detailed information. Good luck!
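A compact sketch of that staging idea (assumptions: pinned host memory via cudaHostAlloc, two device buffers, and a made-up compute kernel; the real working-set sizes would differ):

```cuda
// Sketch: page-locked (pinned) host memory plus two streams, so the upload
// of working set #i+1 can overlap the processing of working set #i.
// compute, n_sets and n are placeholders.
#include <cuda_runtime.h>

__global__ void compute(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = d[i] * d[i];                 // placeholder computation
}

int main() {
    const int n_sets = 8, n = 1 << 20;
    float *h_data, *d_buf[2];
    cudaStream_t s[2];

    // Pinned host memory is required for truly asynchronous H2D/D2H transfers.
    cudaHostAlloc((void **)&h_data, (size_t)n_sets * n * sizeof(float), cudaHostAllocDefault);
    for (int b = 0; b < 2; ++b) {
        cudaMalloc((void **)&d_buf[b], n * sizeof(float));
        cudaStreamCreate(&s[b]);
    }

    for (int i = 0; i < n_sets; ++i) {
        int b = i & 1;                             // alternate buffers/streams
        cudaMemcpyAsync(d_buf[b], h_data + (size_t)i * n, n * sizeof(float),
                        cudaMemcpyHostToDevice, s[b]);
        compute<<<(n + 255) / 256, 256, 0, s[b]>>>(d_buf[b], n);
        cudaMemcpyAsync(h_data + (size_t)i * n, d_buf[b], n * sizeof(float),
                        cudaMemcpyDeviceToHost, s[b]);
        // While stream b is busy with set #i, the next iteration can already
        // be queueing the upload of set #i+1 in the other stream.
    }
    cudaDeviceSynchronize();
    return 0;
}
```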
CUDA 6 will eliminate the need to copy, i.e., the copying will be automatic.
However, you may still benefit from prefetching.
In a nutshell, you want the data for the "next" computation to be transferring while you complete the current computation. To achieve that you need at least two threads on the CPU, and some kind of signalling scheme (to know when to send the next data). Chunking will of course play a big role and affect performance.
The above may be easier on an APU (CPU+GPU on the same die), because the need to copy is eliminated: both processors can access the same memory.
If you want to find some papers on GPU prefetching just use google scholar.
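For reference, here is a tiny sketch of the CUDA 6 "no explicit copy" model mentioned above, using managed (unified) memory; the kernel and sizes are placeholders:

```cuda
// Sketch: CUDA 6 unified memory. Allocate once with cudaMallocManaged and use
// the same pointer on host and device; the runtime migrates the data instead
// of you calling cudaMemcpy.
#include <cuda_runtime.h>

__global__ void scale(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *data;
    cudaMallocManaged((void **)&data, n * sizeof(float));    // visible to CPU and GPU

    for (int i = 0; i < n; ++i) data[i] = 1.0f;              // fill on the host

    scale<<<(n + 255) / 256, 256>>>(data, n);                // data migrates to the GPU
    cudaDeviceSynchronize();                                 // sync before touching it on the host again

    float check = data[0];                                   // migrated back on demand
    (void)check;
    cudaFree(data);
    return 0;
}
```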
It seems that each core of the GPU could handle a request, rather than one main processor (the system's CPU) handling all requests. On the surface it seems possible, perhaps with Templates in the GPU + a Redis database in GPU GDDR5?
Is it possible and worthwhile?
How would the GPU access disks, databases, etc.?
Requests are usually short, sharp processing snippets. You'd have to load each request from main memory into GPU memory, do a computation, and fire it back again. There's an overhead when transferring data from main memory to GPU memory. Therefore, it's only worth doing a GPU computation if the calculation is long enough and the problem is amenable to parallel processing on a GPU.
In essence, GPUs are good at stream processing. Not for lots of small requests.
The previous answer is valid. There is also another caveat regarding GPUs: the instruction set is smaller and data is processed in matrices, i.e. the same operation is applied to every element in a set. So you will need to be very clever in designing what these replicated operations are.
I take it you are considering a GPU HTTP server.