I am looking for options to serve predictions in parallel from a Caffe model on a GPU. Since the GPU has limited memory, what options are available for achieving parallelism while loading the net only once?
I have successfully wrapped my segmentation net with tornado wsgi + flask. But at the end of the day, this is essentially equivalent to serving from a single process. https://github.com/BVLC/caffe/blob/master/examples/web_demo/app.py.
Is having my own copy of the net for each process a strict requirement, given that the net is read-only after training is done? Is it possible to rely on fork for parallelism?
I am working on a sample app that serves results from a segmentation model. It uses copy-on-write: the net is loaded once in the master, and the forked children work from shared memory references. However, I am having trouble starting this setup in a web-server setting: I get a memory error when I try to initialize the model. The web server I am using here is uwsgi.
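For reference, a minimal sketch of the copy-on-write idea, using plain multiprocessing instead of uwsgi (the prototxt/caffemodel paths and the blob names are placeholders; whether the forked children can safely reuse a net that already sits in GPU memory is exactly the part I am unsure about):

# Load the net once in the parent; forked children reuse it via copy-on-write.
import multiprocessing as mp
import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)  # placeholder paths, loaded once

def worker(tasks, results):
    # The child inherits `net` from the parent instead of loading its own copy.
    while True:
        image = tasks.get()
        net.blobs['data'].data[...] = image      # assumes the input blob is named 'data'
        out = net.forward()
        results.put(out['prob'].copy())          # assumes the output blob is named 'prob'

if __name__ == '__main__':
    tasks, results = mp.Queue(), mp.Queue()
    for _ in range(4):
        mp.Process(target=worker, args=(tasks, results), daemon=True).start()
    tasks.put(np.zeros(net.blobs['data'].data.shape[1:], dtype=np.float32))
    print(results.get().shape)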
Has anyone achieved parallelism in the serving layer while loading the net only once (since GPU memory is limited)? I would be grateful if any one of you could point me in the right direction.
Related
I'm confused about how exactly NCCL performs the AllReduce operation in distributed training. Let's assume there are two nodes with one GPU each. They are running a DNN training task in data-parallel mode using Horovod. In the back-propagation phase, they need to perform AllReduce operations on their gradients. Here, the GPU on each node needs to send its local gradient to the other node. I want to understand the data flow during this AllReduce operation. Here is my understanding:
The CPU copies the gradient from the GPU's DRAM to the host DRAM
The CPU sends the gradient to the other node via the NIC
Here, I'm curious what the CUDA kernels launched by NCCL are doing. I don't see any GPU involvement during this communication (the memory copy is performed by the CPU), so it looks like the GPU is doing nothing here. Could anyone give some insight on this? Thanks!
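For context, the user-level call I am asking about is a single allreduce, sketched here with Horovod's PyTorch API (assuming Horovod was built with NCCL support; the tensor is just a stand-in for one layer's local gradient). Everything that happens below this call, including the CUDA kernels NCCL launches, is what I am trying to understand:

import torch
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())

grad = torch.randn(1000, device='cuda')      # stand-in for a local gradient tensor
avg_grad = hvd.allreduce(grad, name='grad')  # Horovod hands this to NCCL when built with it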
We're trying to develop a Natural Language Processing application that has a user-facing component. The user can call models through an API and get the results back.
The models are pretrained using Keras with Theano. We use GPUs to speed up training, and prediction is also sped up significantly by using the GPU. Currently, we have a machine with two GPUs. However, at runtime (i.e. when running the user-facing bits) there is a problem: multiple Python processes sharing the GPUs via CUDA do not seem to offer any parallel speedup.
We're using nvidia-docker with libgpuarray (pygpu), Theano and Keras.
The GPUs are still mostly idle, but adding more Python workers does not speed up the process.
What is the preferred way of solving the problem of running GPU models behind an API? Ideally we'd utilize the existing GPUs more efficiently before buying new ones.
I imagine we want some sort of buffer before sending work off to the GPU, rather than acquiring a lock for each HTTP call?
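Something like the following is what I have in mind: a single process owns the GPU model, HTTP handlers only enqueue requests, and one worker thread batches them before calling predict. This is only a sketch; MAX_BATCH, MAX_WAIT and the echo model are placeholders for our real Keras model and tuning values:

import queue
import threading
import numpy as np

request_queue = queue.Queue()
MAX_BATCH = 32     # largest batch sent to the GPU at once (placeholder value)
MAX_WAIT = 0.01    # seconds to wait for more requests before flushing a batch

def gpu_worker(model):
    # Single consumer: drains the queue, batches inputs, runs one predict() call per batch.
    while True:
        x, reply = request_queue.get()
        batch, replies = [x], [reply]
        try:
            while len(batch) < MAX_BATCH:
                x, reply = request_queue.get(timeout=MAX_WAIT)
                batch.append(x)
                replies.append(reply)
        except queue.Empty:
            pass
        preds = model.predict(np.stack(batch))
        for reply, pred in zip(replies, preds):
            reply.put(pred)

class _EchoModel:                       # stand-in for the real Keras model
    def predict(self, batch):
        return batch

threading.Thread(target=gpu_worker, args=(_EchoModel(),), daemon=True).start()

# In each HTTP handler: create a queue.Queue(1), put (input_array, that_queue)
# on request_queue, and block on its get() for the prediction.
reply = queue.Queue(1)
request_queue.put((np.zeros(3, dtype=np.float32), reply))
print(reply.get())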
This is not an answer to your more general question, but rather an answer based on how I understand the scenario you described.
If someone has coded a system which uses a GPU for some computational task, they have (hopefully) taken the time to parallelize its execution so as to benefit from the full resources the GPU can offer, or something close to that.
That means that if you add a second, similar task - even in parallel - the total amount of time to complete both should be similar to the amount of time needed to complete them serially, i.e. one after the other, since there are very few underutilized GPU resources for the second task to benefit from. In fact, it could even be the case that both tasks end up slower (if, say, they both utilize the L2 cache heavily and thrash it when running together).
At any rate, when you want to improve performance, a good thing to do is profile your application - in this case, using the nvprof profiler or its nvvp frontend (the first link is the official documentation, the second link is a presentation).
I managed to successfully run CUDA programs on a GeForce GTX 750 Ti while using an AMD Radeon HD 7900 as the rendering device (the one actually connected to the display), following this guide; for instance, the Vector Addition sample runs nicely. However, I can only run applications that do not produce visual output. For example, the Mandelbrot CUDA sample does not run and fails with an error:
Error: failed to get minimal extensions for demo:
Missing support for: GL_ARB_pixel_buffer_object
This sample requires:
OpenGL version 1.5
GL_ARB_vertex_buffer_object
GL_ARB_pixel_buffer_object
The error originates from asking glewIsSupported() for these extensions. Is there any way to run an application like these CUDA samples so that the CUDA operations run on the GTX as usual but the window is drawn on the Radeon card? I tried to convince Nsight Eclipse to run a remote debugging session with my own PC as the remote host, but something else failed right away. Is this supposed to actually work? Could it be possible to use VirtualGL?
Some of the NVIDIA CUDA samples that involve graphics, such as the Mandelbrot sample, implement an efficient rendering strategy: they bind OpenGL data structures - Pixel Buffer Objects in the case of Mandelbrot - to the CUDA arrays containing the simulation data and render them directly from the GPU. This avoids copying the data from the device to the host at the end of each iteration of the simulation, and results in a lightning-fast rendering phase.
To answer your question: the NVIDIA samples, as they are, need to run the rendering phase on the same GPU where the simulation phase is executed; otherwise, the GPU that handles the graphics would not have the data to be rendered in its memory.
This does not mean the samples cannot be modified to work with multiple GPUs. It should be possible to copy the simulation data back to the host at the end of each iteration, and then render it using a custom method or even send it over the network. This would require (1) modifying the code, separating the simulation and rendering phases and making them independent of each other, and (2) accepting the big loss in frames per second that would result.
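A rough sketch of that modification, using PyCUDA only for illustration (the kernel launches and the display-side render_frame are placeholders; the point is just the per-iteration device-to-host copy):

import numpy as np
import pycuda.autoinit              # creates the CUDA context on the GTX
import pycuda.gpuarray as gpuarray

frame_gpu = gpuarray.zeros((1080, 1920), dtype=np.float32)  # simulation buffer on the CUDA GPU

def render_frame(frame):
    pass                            # placeholder for whatever draws on the display GPU

for step in range(100):             # simulation loop
    # ... launch the CUDA kernels that update frame_gpu ...
    frame_host = frame_gpu.get()    # device-to-host copy each iteration (this is the FPS cost)
    render_frame(frame_host)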
I am using a GPU cluster without GPUDirect support. From this briefing, the following is done when transferring GPU data across nodes:
GPU writes to pinned sysmem1
CPU copies from sysmem1 to sysmem2
Infiniband driver copies from sysmem2
Now I am not sure whether the second step happens implicitly when I transfer sysmem1 across InfiniBand using MPI. Assuming it does, my current programming model is something like this:
cudaMemcpy(hostmem, devicemem, size, cudaMemcpyDeviceToHost).
MPI_Send(hostmem,...)
Is my above assumption true and will my programming model work without causing communication issues?
Yes, you can use CUDA and MPI independently (i.e. without GPUDirect), just as you describe.
Move the data from device to host
Transfer the data as you ordinarily would, using MPI
You might be interested in this presentation, which explains CUDA-aware MPI and gives a side-by-side example on slide 11 of regular MPI and CUDA-aware MPI.
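For illustration, here is the same two-step pattern sketched with mpi4py and PyCUDA (your question uses the C APIs cudaMemcpy/MPI_Send; this is just the staging-through-host-memory version of the pattern, not GPUDirect or CUDA-aware MPI, and the buffer contents are placeholders):

# run with: mpiexec -n 2 python this_script.py
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

devicemem = gpuarray.to_gpu(np.full(1024, rank, dtype=np.float32))  # data living on the GPU

if rank == 0:
    hostmem = devicemem.get()           # step 1: device-to-host copy (cudaMemcpyDeviceToHost)
    comm.Send(hostmem, dest=1, tag=0)   # step 2: ordinary MPI send from host memory
elif rank == 1:
    hostmem = np.empty(1024, dtype=np.float32)
    comm.Recv(hostmem, source=0, tag=0)
    devicemem.set(hostmem)              # copy the received data back onto the GPU if needed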
I have a question about Griffon.
Is there a way to decrease the memory consumption of Griffon applications?
The sample Griffon application process (just a single window with a label) takes ~80 MB on Windows. Is there a way to change something to visibly decrease this basic memory usage?
Griffon is a great solution, but my customer complains that such a simple application takes this much memory (more than e.g. Word or Outlook, and comparable with entire complicated Java applications).
A barebones Griffon app (created by calling create-app and nothing more) reports 49 MB of memory usage; in terms of file size it's a bit above 7 MB. A Java-based Griffon app (griffon create-app sample --file-type=java) comes in at 42 MB of memory usage, with the same file size.
This is of course using the default settings provided by the run-app command. Further memory configuration settings may be applied to limit and streamline resource consumption.