Can you use Amazon EC2 GPU instances for real-time rendering? - cuda

I need a remote PC/server that has a decent 3D card in it, to perform real-time 3D rendering... imagine running a 3D game on a remote server and that's a good comparison.
Most VPS and dedicated servers do not have good graphics capabilities for obvious reasons, but Amazon does offer special GPU instances. They're sold for GPGPU computation, i.e. data-crunching on the GPU with tools like CUDA, but I wondered whether they could also be used for real-time 3D rendering.
Can anyone provide a solid answer to that?
Edit: I should add that it's my own 3D code and I want to know the capabilities of EC2 for this purpose; this is not a generic EC2 question.

Amazon GPU servers are equipped with NVIDIA Tesla GPUs. While these are best at GPGPU work, they also have better-than-average capabilities for real-time graphics rendering, though in this respect they are inferior to NVIDIA GTX cards (see the GPU specs on NVIDIA's website).
Now, you can use Amazon for real-time rendering, but your bottleneck will be the network bandwidth. Tesla cards can be used with OpenGL to render graphics into offscreen buffers very fast, but then you need to find a way to read back the pixels of each rendered frame and stream them to the client at an acceptable frame rate. Reading pixels back from the GPU with OpenGL is already quite slow (though you can use tricks like PBO ping-ponging), and I don't really think you can stream frames at standard resolutions (800x600 or even less) from a remote server so that the client receives them at a minimally acceptable frequency. I do believe it will be possible in the future :)
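For reference, here is a minimal sketch of the PBO ping-pong readback mentioned above. It assumes an OpenGL context already exists on the server (e.g. created against a virtual X display) and that a loader such as GLEW has been initialized; the resolution and buffer handling are illustrative, not a definitive implementation.

    // Minimal sketch: asynchronous framebuffer readback with two PBOs (ping-pong).
    // Assumes a valid OpenGL context, GLEW initialized, and a WIDTH x HEIGHT frame
    // already rendered into the currently bound framebuffer.
    #include <GL/glew.h>
    #include <cstring>

    const int WIDTH = 800, HEIGHT = 600;
    const size_t FRAME_BYTES = WIDTH * HEIGHT * 4;   // RGBA8

    GLuint pbo[2];
    int writeIdx = 0;

    void initReadback() {
        glGenBuffers(2, pbo);
        for (int i = 0; i < 2; ++i) {
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
            glBufferData(GL_PIXEL_PACK_BUFFER, FRAME_BYTES, nullptr, GL_STREAM_READ);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }

    // Call once per rendered frame; copies the *previous* frame into 'out'
    // while the current frame's readback proceeds asynchronously.
    // (On the very first call the returned buffer is still empty.)
    void readbackFrame(unsigned char* out) {
        int readIdx = 1 - writeIdx;

        // Kick off an asynchronous transfer of the current frame into one PBO.
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[writeIdx]);
        glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        // Map the other PBO, which holds the frame requested last time.
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[readIdx]);
        if (void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
            std::memcpy(out, src, FRAME_BYTES);   // hand off to your encoder/streamer here
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

        writeIdx = readIdx;
    }

Even with the readback pipelined like this, the network remains the limiting factor, which is why you would normally compress the frames (e.g. with a hardware video encoder) before streaming.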
P.S. My answer is based on personal experience with Amazon EC2.

Yes, Amazon EC2 is well suited for rendering. I've been doing this at large scale for over 3 years for a mobile application. Throughput has been fine for short animations, which I move from EC2 to S3/CloudFront.

Related

Display via Tesla graphics card

I want to display a processed video on a monitor. For video processing in CUDA, I am thinking of getting an NVIDIA Tesla-grade card, but it does not have any video-out port. Is there a way to create the frame buffer on the Tesla card, then transfer it to system memory and display it via the motherboard graphics?
PS: I don't want to compute anything on the CPU, so that I can get near real-time performance.
For video processing (and display), and given what I understand of your problem, Tesla is probably not your best choice.
Tesla cards are expensive, (partly) because of double-precision support, which you don't need for video processing.
Tesla cards don't have any video port, meaning you have to send your frames back to system memory (obviously possible; see the sketch at the end of this answer). That means a performance penalty, and more code to write and maintain.
Did you have a look at the Quadro product line? They have display output, and are usually meant for this kind of application (but are still expensive).
If you want to display, that probably means you're working on a desktop application, so your graphics card won't be working 24/7 at full compute load? In that case, why not a GeForce?
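To make the "send frames back to system memory" step concrete, here is a minimal sketch using the CUDA runtime API; the frame size, names, and the display hand-off are illustrative assumptions, not a specific recommendation.

    // Minimal sketch: copy a processed frame from a headless (Tesla) GPU back to
    // host memory so that another adapter can display it.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        const int width = 1920, height = 1080;            // illustrative resolution
        const size_t frameBytes = width * height * 4;     // RGBA8 frame

        unsigned char* dFrame = nullptr;
        unsigned char* hFrame = nullptr;

        cudaMalloc(&dFrame, frameBytes);
        // Pinned host memory speeds up device-to-host copies and enables async transfers.
        cudaHostAlloc(&hFrame, frameBytes, cudaHostAllocDefault);

        // ... launch your video-processing kernels that write into dFrame ...

        // Copy the finished frame back to the host; this extra hop is the price
        // of using a card with no display output.
        cudaMemcpy(hFrame, dFrame, frameBytes, cudaMemcpyDeviceToHost);

        // hFrame can now be handed to whatever API displays via the motherboard graphics.

        cudaFreeHost(hFrame);
        cudaFree(dFrame);
        return 0;
    }

In practice you would use cudaMemcpyAsync on a stream and double-buffer the frames so the copy overlaps with processing the next frame.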

Chromebook running Ubuntu - hardware required for Unity

I'm looking to buy a Chromebook and install either Ubuntu 14 or Ubuntu 16 on it. I looked at the Unity specs and did some research, but it doesn't appear certain that Unity will run.
I'm wondering what specs a Chromebook needs to run the Unity GUI so I can do some light development work on it.
Further, is a dual-core processor enough to run Unity, or do I need a quad-core CPU? Do I need 4 GB of RAM, or more?
Also, can you recommend one that will work for this need?
Thank you
You'll need at least, and probably more than, 4 GB of RAM in order to use Unity effectively in a Linux environment.
Dual-core should be enough for things to run; however, everything is going to be more responsive if you're using a quad-core system.
You will need to get the best graphics hardware you can find; Intel HD may work, but I would be more optimistic about a Tegra GPU being capable of running Unity. Graphics drivers will probably be a hurdle here.
A Chromebook is going to run out of disk space very quickly. Unity itself takes around 2.5 GB once installed, and each game project, depending on its graphics and audio resources, is going to consume disk space very quickly. A 32 GB drive would be the absolute minimum, and I can still foresee the inevitably full drive causing issues.
Ultimately I would suggest finding a laptop with higher specs than a typical Chromebook if you're serious about using Unity on it.
My best advice here, though, is don't buy a Chromebook for this purpose unless you're confident in the retailer being open-minded about returns.

Does watching HD videos slow down my program using the CUDA GPU? [duplicate]

I'm trying to figure out if I can use OpenACC in place of normal serial CPU execution. Usually my programming is all about 3D graphics, or otherwise uses the GPU in some way, e.g. image processing or some other type of rendering that requires the use of shaders. I'm trying to figure out whether this library would benefit me or not.
The reason I ask is: if I'm rendering 3D graphics (as fast as possible), would OpenACC slow down that process in any way, or is it able to maintain its (in theory) "high frame rates"?
If so, what's the trade-off, and how large is it? I'm not willing to lose 3D graphics (display) performance to speed up operations that could be done serially on the CPU.
Edit:
This is a C++ context.
On the AMD and NVIDIA GPUs that I am familiar with, OpenACC programs will make use of compute resources that would also be used to some degree by shader programs. There are many other pieces of graphics hardware in a GPU that are not shared between compute and graphics, but there are some shared resources. Likewise, the GPU may be connected to the system by PCIe, so this can also present a shared resource or contention point (however, it's the rare compute or graphics program that would even come close to using up the bandwidth of a modern Gen3 x16 PCIe connection).
So if you were using graphics (or compute) shaders as well as OpenACC acceleration, there would be contention for resources to some degree. The level of contention, or the trade-off, is not something that I can generalize about. It will depend very much on the specifics of your program, and on the extent and detailed sequencing of the compute functions and the graphics functions.
GPU designers have these types of use-cases in mind, and so GPUs are generally pretty good at rapid context switching between the various tasks that may compete for resources.
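For context, this is roughly what replacing a serial CPU loop with OpenACC looks like; a minimal sketch, assuming an OpenACC-capable compiler such as nvc++ (the array sizes and names are illustrative).

    // Minimal sketch: offloading a serial loop with OpenACC.
    // Build with an OpenACC-capable compiler, e.g. nvc++ -acc.
    #include <vector>
    #include <cstdio>

    int main() {
        const int n = 1 << 20;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
        float* pa = a.data();
        float* pb = b.data();
        float* pc = c.data();

        // This loop runs on the GPU; while it runs, it competes with any
        // graphics work (shaders) for the same compute units.
        #pragma acc parallel loop copyin(pa[0:n], pb[0:n]) copyout(pc[0:n])
        for (int i = 0; i < n; ++i) {
            pc[i] = pa[i] + pb[i];
        }

        printf("c[0] = %f\n", pc[0]);
        return 0;
    }

Whether a region like this interferes with your rendering depends on how long the kernels run and how they are interleaved with your draw calls, which is exactly the part that cannot be generalized.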

Developing for CUDA on "cheap" GPUs

I develop algorithms in CUDA on my desktop which should later run on a server.
Is it okay to use a recent low-end card (like compute capability 2.1) to get all the nice debug and profiling features, and then put the code on the server with the high-end card (with the same compute capability)? Would I just need to adjust the thread/grid sizes, or does it change everything™?
Example: I would develop on a Quadro 600 and the server would use a Tesla C2075.
As long as your kernel launch and the kernel itself are scalable, you have no problem.
Check out this question:
CUDA development on different cards?
There are some issues, like the memory bandwidth being different (25.6 GiB/s on the Quadro and 148 GiB/s on the Tesla, according to your links), or a different number of SMs (the driver could distribute blocks across SMs differently). However, in most cases such small differences are of minor importance.
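One common way to keep a kernel scalable across cards with different SM counts is a grid-stride loop, so the same launch covers the whole problem on any GPU; a minimal sketch (the kernel and launch helper are illustrative, not your actual code):

    // Minimal sketch: a grid-stride loop lets the same kernel scale across GPUs
    // with different SM counts without retuning the launch for each card.
    #include <cuda_runtime.h>

    __global__ void scaleKernel(float* data, float factor, int n) {
        // Each thread strides across the whole array, so any grid size covers all n elements.
        for (int i = blockIdx.x * blockDim.x + threadIdx.x;
             i < n;
             i += blockDim.x * gridDim.x) {
            data[i] *= factor;
        }
    }

    void scaleOnDevice(float* dData, float factor, int n) {
        // Any reasonable configuration works; a bigger GPU simply runs more blocks concurrently.
        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        if (blocks > 1024) blocks = 1024;   // cap the grid; the stride loop handles the rest
        scaleKernel<<<blocks, threads>>>(dData, factor, n);
    }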
If the server has more than one GPU installed, then you need to change your code to run on multiple GPUs to fully leverage the power of the server, although the unmodified code will still run fine on a single card.
If there's only one card in the server, the general rule of thumb is that you do not need to change a single line of code to harness the power of the stronger GPU, as the driver distributes the load among the SMs automatically.
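If you do end up on a multi-GPU server, the change mostly amounts to selecting each device in turn and splitting the data; a minimal sketch, with an illustrative work-splitting scheme:

    // Minimal sketch: split work across however many GPUs the machine exposes.
    // With a single GPU the loop simply runs once, so the same code still works.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int deviceCount = 0;
        cudaGetDeviceCount(&deviceCount);
        if (deviceCount == 0) { printf("No CUDA devices found\n"); return 1; }

        const int n = 1 << 22;
        int chunk = (n + deviceCount - 1) / deviceCount;

        for (int dev = 0; dev < deviceCount; ++dev) {
            cudaSetDevice(dev);   // subsequent allocations and launches target this GPU
            int offset = dev * chunk;
            int count  = (offset + chunk <= n) ? chunk : (n - offset);
            printf("Device %d handles elements [%d, %d)\n", dev, offset, offset + count);
            // ... cudaMalloc per-device buffers, copy this slice over, and launch
            // the same kernel you already use on a single card ...
        }
        return 0;
    }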

Use NVIDIA card for CUDA, motherboard for video

I want to use the motherboard as the primary display adapter and my NVIDIA graphics card as a dedicated CUDA processor. My first thought was to simply plug the monitor's VGA cable into the motherboard's VGA port and hope the BIOS was smart enough to use the on-board video as the display adapter when it booted. That didn't work. The BIOS must have detected the NVIDIA card and continued to use it as the display adapter. The next thing I looked for was a setting in the BIOS to tell it "don't use the NVIDIA 560 as the display adapter, use the on-board video as the display adapter". I searched through the BIOS and the Web, but either this cannot be done or I cannot figure out how to do it. The mobo is a BIOSTAR TH67+ LGA 1155. Windows 7 OS.
RESULTS SUMMARY (from answers provided below)
Enabling the Integrated Graphics Device (IGD) in the BIOS will allow the system to be driven from the on-board graphics even with the graphics card connected to the system bus. However, the graphics card cannot be used for CUDA processing. Windows will not enable graphics devices unless a monitor is attached to them. The normal driver stack cannot see them. Solution: use Linux, or attach a display to the graphics card but do not use it. The Tesla cards (GPGPU-only) are not recognized by Windows as graphics devices, so they don't suffer from this.
Also, a newer BIOSTAR motherboard, the TZ68A+, supports the Virtu drivers, which permit sophisticated simultaneous use of the graphics card and on-board video.
Looking at the BIOS manual (.zip), the setting you probably want is Chipset -> North Bridge -> Initiate Graphics Adapter. Try setting it to IGD (Integrated Graphics Device).
I believe this will happen automatically as the native video won't support CUDA. After installing the SDK, if you run DeviceQuery, do you see more than one result?
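For reference, the heart of deviceQuery is simply enumerating CUDA-capable devices; a minimal sketch (the output format is illustrative, and only CUDA-capable NVIDIA devices appear in this list):

    // Minimal sketch of what the SDK's deviceQuery sample boils down to:
    // list every CUDA-capable device the driver exposes.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s (compute capability %d.%d)\n",
                   i, prop.name, prop.major, prop.minor);
        }
        return 0;
    }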
I believe H67 allows coexistence of both the integrated and dedicated GPUs. Check out Lucid Virtu here (http://www.lucidlogix.com/driverdownloads-virtu.html); it allows switching GPUs on the fly. But I don't know if it affects the CUDA device query.
I never tried it on my rig, because it's X58; I just heard about it from Tom's Hardware. Try it out and let us know. Lucid Virtu is definitely worth a try: it's free, and it can cut your electric bill.