GPGPU Applications other than Image processing? - cuda

I am looking for a few CPU applications that could be ported to GPGPU for better efficiency.
Apart from image processing, in which other areas can GPGPU be used?
This is for my graduate project.

The specialized processing architectures of GPU compute engines are useful for just about any data crunching problem where you have:
a non-trivial amount of data,
a non-trivial computation to perform on every element of that data, and
input data for each output element that fits in GPU memory, or can be choreographed to arrive in GPU memory when it is needed.
It helps if the computation can be performed independently on all data elements at the same time, but this is not strictly required.
Image processing happens to be one example of that scenario - a finite (but large) number of pixels to process, and many image algorithms can be executed on each pixel in parallel.
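As a hedged illustration of that pattern (my own example, not anyone's production code), here is about the simplest possible CUDA kernel of this kind: one thread per pixel, applying an independent brightness/contrast adjustment to a greyscale image. The image layout, the gain/offset parameters, and the buffer names are assumptions made for the sketch.

    // Minimal per-pixel CUDA sketch: each thread handles one pixel independently.
    // Assumes a greyscale 8-bit image in row-major layout.
    __global__ void adjustBrightness(const unsigned char *in, unsigned char *out,
                                     int width, int height, float gain, float offset)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height) {
            float v = gain * in[y * width + x] + offset;   // independent of all other pixels
            out[y * width + x] = (unsigned char)fminf(fmaxf(v, 0.0f), 255.0f);
        }
    }

    // Launched e.g. with 16x16 blocks covering the image (d_in/d_out are device buffers):
    //   dim3 block(16, 16);
    //   dim3 grid((width + 15) / 16, (height + 15) / 16);
    //   adjustBrightness<<<grid, block>>>(d_in, d_out, width, height, 1.2f, 10.0f);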
Other examples include:
Generalized signal analysis, such as processing audio signals. Image processing is just a specialized form of signal analysis.
Pattern recognition, where much of the challenge is separating the signal from the noise. Voice recognition, anyone?
Three-dimensional surface matching, such as figuring out the shapes of organic compounds based on the flex angles of their chemical bonds, or figuring out whether two organic compounds are likely to interact in interesting ways (e.g., bioreceptors).
Physical modelling of all kinds (collision simulations, seismic analysis, etc.).
And of course cryptography, where you can always spend more compute time going over the same data again and again.
GPU compute engines are not well suited for problems where the volume of data significantly dwarfs the computations to be performed. GPUs work well on stuff in memory. Moving data into or out of GPU memory is often the most expensive step of an entire computation, so you want to make sure you have enough computation going on to "make up" for the cost of loading the data into memory. If the data is too big to fit into memory you have to adopt distributed computing tactics.
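If you want to check whether a given problem is transfer-bound in practice, one rough way is to time the host-to-device copy against the kernel itself with CUDA events. This is only a sketch under assumptions: d_in/d_out are assumed already allocated with cudaMalloc, and someKernel stands in for whatever your actual computation is.

    // Rough sketch: compare copy time with kernel time using CUDA events.
    cudaEvent_t t0, t1, t2;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

    cudaEventRecord(t0);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);   // cost of getting data onto the GPU
    cudaEventRecord(t1);
    someKernel<<<blocks, threads>>>(d_in, d_out, n);         // the actual computation (placeholder)
    cudaEventRecord(t2);
    cudaEventSynchronize(t2);

    float copyMs = 0.0f, kernelMs = 0.0f;
    cudaEventElapsedTime(&copyMs, t0, t1);
    cudaEventElapsedTime(&kernelMs, t1, t2);
    // If copyMs dominates kernelMs, the GPU spends most of its time waiting for data.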
For example, calculating a primary key index of a petabyte database probably isn't a great fit for a GPU, since most of the effort will be spent just getting the data off the hard disk into memory. The index computation itself is fairly trivial, which doesn't make for a very interesting GPU win, and while I'm sure the data could be carved up into chunks and the chunks indexed independently by a boatload of GPU cores, variability in the data will likely prevent the GPU from operating at its full capacity. (GPU code works best when all "oarsmen" - processor cores / threads - are pulling in the same direction: uniform execution on separate data.) While database indexing might see some benefit from a GPU approach, it certainly won't be as big a performance improvement over the CPU baseline as something better suited to the GPU execution model's constraints, like signal processing.

Brute force crypto attacks? MD5 has been done by Whitepixel and SHA-256 has been done by all the Bitcoin miners. On the other hand, I am not aware of any GPU implementations of bcrypt() or scrypt(), but an academic working in the area is probably a better person to ask.

The easiest way to figure out what kinds of applications are well suited to GPGPU is to look at the speedups other groups have achieved. Here are a couple of links with that information:
NVIDIA's case studies
AccelerEyes' case studies
Looks like military/aerospace, life science, energy, finance, manufacturing, media, and a few other smaller industries have examples of strong speedups.

Related

Utilizing GPU worth it?

I want to compute the trajectories of particles subject to certain potentials, a typical N-body problem. I've been researching methods for utilizing a GPU (CUDA for example), and they seem to benefit simulations with large N (20000). This makes sense since the most expensive calculation is usually finding the force.
However, my system will have "low" N (less than 20), many different potentials/factors, and many time steps. Is it worth it to port this system to a GPU?
Based on the Fast N-Body Simulation with CUDA article, it seems that it is efficient to have different kernels for different calculations (such as acceleration and force). For systems with low N, it seems that the cost of copying to/from the device is actually significant, since for each time step one would have to copy and retrieve data from the device for EACH kernel.
Any thoughts would be greatly appreciated.
If you have less than 20 entities that need to be simulated in parallel, I would just use parallel processing on an ordinary multi-core CPU and not bother about using GPU.
Using a multi-core CPU would be much easier to program and avoid the steps of translating all your operations into GPU operations.
Also, as you already suggested, the performance gain using GPU will be small (or even negative) with this small number of processes.
There is no need to copy results from the device to host and back between time steps. Just run your entire simulation on the GPU and copy results back only after several time steps have been calculated.
For how many different potentials do you need to run simulations? Enough to just use the structure from the N-body example and still load the whole GPU?
If not, and assuming the potential calculation is expensive, I'd think it would be best to use one thread for each pair of particles in order to make the problem sufficiently parallel. If you use one block per potential setting, you can then write out the forces to shared memory, __syncthreads(), and use a subset of the block's threads (one per particle) to sum the forces. __syncthreads() again, and continue for the next time step.
If the potential calculation is not expensive, it might be worth exploring first where the main cost of your simulation is.
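To make the suggested layout concrete, here is a hedged sketch (my own illustration, not the Fast N-Body Simulation code): one block per potential setting, one thread per particle pair writing its force into shared memory, a __syncthreads(), then one thread per particle summing its forces, with many time steps run entirely on the device so nothing is copied back until the end. N, dt, pairForce() and the simple Euler update are all assumptions made to keep the example short.

    #define N 16   // particles per system; assumed small (< 20)

    __device__ float3 pairForce(float3 a, float3 b, float param)
    {
        // Placeholder pairwise force: softened inverse-square law scaled by the potential parameter.
        float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
        float r2 = dx * dx + dy * dy + dz * dz + 1e-6f;
        float s  = param * rsqrtf(r2) / r2;
        return make_float3(dx * s, dy * s, dz * s);
    }

    // One block per potential setting; launched with N*N threads per block, e.g.
    //   simulate<<<numPotentials, N * N>>>(d_pos, d_vel, d_params, nSteps, dt);
    __global__ void simulate(float3 *pos, float3 *vel, const float *potentialParams,
                             int nSteps, float dt)
    {
        __shared__ float3 f[N][N];   // f[i][j] = force on particle i from particle j
        __shared__ float3 p[N], v[N];

        const float param = potentialParams[blockIdx.x];
        float3 *myPos = pos + blockIdx.x * N;
        float3 *myVel = vel + blockIdx.x * N;

        if (threadIdx.x < N) { p[threadIdx.x] = myPos[threadIdx.x]; v[threadIdx.x] = myVel[threadIdx.x]; }
        __syncthreads();

        int i = threadIdx.x / N, j = threadIdx.x % N;   // map flat thread index to a pair

        for (int step = 0; step < nSteps; ++step) {
            if (i < j) {                                 // one thread per unordered pair
                float3 fij = pairForce(p[i], p[j], param);
                f[i][j] = fij;
                f[j][i] = make_float3(-fij.x, -fij.y, -fij.z);   // Newton's third law
            }
            __syncthreads();

            if (threadIdx.x < N) {                       // one thread per particle sums its forces
                float3 a = make_float3(0.0f, 0.0f, 0.0f);
                for (int k = 0; k < N; ++k)
                    if (k != threadIdx.x) { a.x += f[threadIdx.x][k].x; a.y += f[threadIdx.x][k].y; a.z += f[threadIdx.x][k].z; }
                // Simple explicit Euler update, unit mass assumed.
                v[threadIdx.x].x += a.x * dt;  v[threadIdx.x].y += a.y * dt;  v[threadIdx.x].z += a.z * dt;
                p[threadIdx.x].x += v[threadIdx.x].x * dt;  p[threadIdx.x].y += v[threadIdx.x].y * dt;  p[threadIdx.x].z += v[threadIdx.x].z * dt;
            }
            __syncthreads();
        }

        if (threadIdx.x < N) { myPos[threadIdx.x] = p[threadIdx.x]; myVel[threadIdx.x] = v[threadIdx.x]; }
    }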

OpenCL for GPU vs. FPGA

I recently read about OpenCL/CUDA for FPGAs vs. GPUs.
As I understand it, FPGAs win on the power criterion.
The explanation I found in an article:
Reconfigurable devices can have much lower power consumption from peak
values since only configured portions of the chip are active
Based on the above I have a question: does it mean that if some CU [Compute Unit] doesn't execute any work-item, it still consumes power? (And if yes, what does it consume power for?)
Yes, idle circuitry still consumes power. It doesn't consume as much, but it still consumes some. The reason for this is down to how transistors work, and how CMOS logic gates consume power.
Classically, CMOS logic (the type on all modern chips) only consumes power when it switches state. This made it very low power compared to the technologies that came before it, which consumed power all the time. Even so, every time a clock edge occurs, some logic changes state even if there's no work to do. The higher the clock rate, the more power used. GPUs tend to have high clock rates so they can do lots of work; FPGAs tend to have low clock rates. That's the first effect, but it can be mitigated by not clocking circuits that have no work to do (called 'clock gating').
As the size of transistors became smaller and smaller, the amount of power used when switching became smaller, but other effects (known as leakage) became more significant. Now we're at a point where leakage power is very significant, and it's multiplied up by the number of gates you have in a design. Complex designs have high leakage power; simple designs have low leakage power (in very basic terms). This is a second effect.
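For reference, the usual first-order model behind those two effects (standard textbook approximations I'm adding here, not numbers measured on either kind of chip) is:

    P_total   ≈ P_dynamic + P_leakage
    P_dynamic ≈ alpha * C * V^2 * f      (activity factor x switched capacitance x supply voltage^2 x clock frequency)
    P_leakage ∝ gate count x per-gate leakage current (which has grown as transistors shrink)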
Hence, for a simple task it may be more power efficient to have a small, dedicated, low-speed FPGA rather than a large, complex, but high-speed, general-purpose CPU/GPU.
As always, it depends on the workload. For workloads that are well-supported by native GPU hardware (e.g. floating point, texture filtering), I doubt an FPGA can compete. Anecdotally, I've heard about image processing workloads where FPGAs are competitive or better. That makes sense, since GPUs are not optimized to operate on small integers. (For that reason, GPUs often are uncompetitive with CPUs running SSE2-optimized image processing code.)
As for power consumption, for GPUs, suitable workloads generally keep all the execution units busy, so it's a bit of an all-or-nothing proposition.
Based on my research on FPGAs and the way they work, these devices can be designed to be very power efficient and really fine-tuned for one special task (e.g., an algorithm), using the smallest resources possible (and therefore the lowest energy consumption among all possible choices except an ASIC).
When implementing Turing-complete algorithms on FPGAs, designers have the option of either unrolling their algorithms to use the maximum parallelism offered, or using a compact sequential design. Each method has its own cost-benefit trade-off: the former maximizes performance at the cost of higher resource consumption, and the latter minimizes area and resource consumption by reusing hardware, at the cost of performance.
This level of control over the implementation of algorithms doesn't exist when developing for GPUs. Developers are free to use the most efficient algorithms, yet they are not the ones determining the final, precise hardware implementation of their algorithms. Unlike FPGA designers, who count "nanoseconds" when calculating their design's hardware implementation (using post-layout tools), GPU developers rely on the available frameworks to handle all implementation details for them automatically. They develop at a much higher level than FPGA designers.
So the well-known topic of trade-offs pops up here too: do you want exact control over the hardware implementation at the cost of longer development times? Choose FPGAs. Do you want parallelism, but are willing to give up exact control over the hardware implementation and develop using your existing software skills? Use OpenCL.
Kudos to @hamzed, but OpenCL on FPGAs does not take control away from the designer. It actually gives the best of both worlds: the full programmability of an FPGA, with all the benefits of custom parallel algorithms, as well as much better design closure speed than RTL. By being clever about when your algorithm moves data and when it does not, you can get close to the theoretical performance of FPGAs. Please see the last chart in this reference: https://www.iwocl.org/wp-content/uploads/iwocl2017-andrew-ling-fpga-sdk.pdf

Why do GPU based algorithms perform faster

I just implemented an algorithm on the GPU that computes the differences between consecutive elements of an array. I compared it with a CPU-based implementation and noticed that for large arrays, the GPU-based implementation performs faster.
I am curious WHY the GPU-based implementation performs faster. Please note that I know the surface-level reasoning: a GPU has several cores and can thus do the operation in parallel, i.e., instead of visiting each index sequentially, we can assign a thread to compute the difference at each index.
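For concreteness, a kernel of the kind described here (a sketch of the general idea, not the asker's actual code) is only a few lines:

    // One thread per output element: out[i] = in[i + 1] - in[i].
    __global__ void adjacentDiff(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n - 1)
            out[i] = in[i + 1] - in[i];
    }

    // Launched e.g. as: adjacentDiff<<<(n + 255) / 256, 256>>>(d_in, d_out, n);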
But can someone give me a deeper reason why GPUs perform faster? What is so different about their architecture that they can beat a CPU-based implementation?
They don't perform faster, generally.
The point is: Some algorithms fit better into a CPU, some fit better into a GPU.
The execution model of GPUs differs (see SIMD), the memory model differs, the instruction set differs... The whole architecture is different.
There is no obvious way to compare a CPU with a GPU. You can only discuss whether (and why) CPU implementation A of an algorithm is faster or slower than GPU implementation B of that algorithm.
This ended up kind of vague, so here's the tip of the iceberg of concrete reasons: the strong side of the CPU is random memory access, branch prediction, etc. The GPU excels when there is a large amount of computation with high data locality, so that your implementation can achieve a good compute-to-memory-access ratio. SIMD makes GPU implementations slower than CPU ones where there is a lot of unpredictable branching across many code paths, for example.
The real reason is that a GPU not only has several cores, but it has many cores, typically hundreds of them! Each GPU core however is much slower than a low-end CPU.
But the programming model is not at all like that of multi-core CPUs. So most programs cannot be ported to, or benefit from, GPUs.
While some answers have already been given here and this is an old thread, I just thought I'd add this for posterity and what not:
The main reason that CPUs and GPUs differ in performance so much for certain problems is design decisions about how to allocate the chip's resources. CPUs devote much of their chip space to large caches, instruction decoders, peripheral and system management, etc. Their cores are much more complicated and run at much higher clock rates (which produces more heat per core that must be dissipated). By contrast, GPUs devote their chip space to packing as many floating-point ALUs onto the chip as they can possibly get away with.
The original purpose of GPUs was to multiply matrices as fast as possible (because that is the primary type of computation involved in graphics rendering). Since matrix multiplication is an embarrassingly parallel problem (i.e., each output value is computed completely independently of every other output value) and the code path for each of those computations is identical, chip space can be saved by having several ALUs follow the instructions decoded by a single instruction decoder, since they're all performing the same operations at the same time. By contrast, each of a CPU's cores must have its own separate instruction decoder, since the cores are not following identical code paths, which makes each of a CPU's cores much larger on the die than a GPU's cores.
Since the primary computations performed in matrix multiplication are floating-point multiplication and floating-point addition, GPUs are implemented such that each of these is a single-cycle operation; in fact, they even contain a fused multiply-and-add instruction that multiplies two numbers and adds the result to a third number in a single cycle. This is much faster than a typical CPU, where floating-point multiplication is often a many-cycle operation. Again, the trade-off here is that the chip space is devoted to the floating-point math hardware, and other instructions (such as control flow) are often much slower per core than on a CPU, or sometimes don't exist on a GPU at all.
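As a small hedged illustration of that fused multiply-add (my own example with made-up names, a trivial matrix-vector product): each thread accumulates one row's dot product with fmaf(), which multiplies and adds in a single fused operation on the GPU.

    // One thread per output row of y = A * x; fmaf(a, b, c) computes a*b + c as one fused operation.
    __global__ void matVec(const float *A, const float *x, float *y, int rows, int cols)
    {
        int r = blockIdx.x * blockDim.x + threadIdx.x;
        if (r < rows) {
            float acc = 0.0f;
            for (int c = 0; c < cols; ++c)
                acc = fmaf(A[r * cols + c], x[c], acc);
            y[r] = acc;
        }
    }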
Also, since GPU cores run at much lower clock rates than typical CPU cores and don't contain as much complicated circuitry, they don't produce as much heat per core (or use as much power per core.) This allows more of them to be packed into the same space without overheating the chip and also allows a GPU with 1,000+ cores to have similar power and cooling requirements to a CPU with only 4 or 8 cores.

I read that Huffman coding does not work on GPU but this paper claims otherwise

I have read in several places that building a Huffman encoder on a GPU is not very efficient because the algorithm is sequential. But this paper offers a possible implementation and claims it is faster than a CPU: http://tesla.rcub.bg.ac.rs/~taucet/docs/papers/PAVLE-AnaBalevic09.pdf
Please advise whether the results of the paper are incorrect.
It looks like an interesting approach but I'll just offer one caveat: there is very little information about the baseline CPU implementation, but it is most likely single threaded and may not be particularly optimised. It's human nature for people to want to make their optimised implementation look as good as possible, so they tend to use a mediocre baseline benchmark in order to give an impressive speed up ratio. For all we know it may be that a suitably optimised multi-threaded implementation on the CPU could match the GPGPU performance, in which case the GPGPU implementation would not be so impressive. Before investing a lot of effort in a GPGPU implementation I would want to first exhaust all the optimisation possibilities on the CPU (perhaps even using the parallel algorithm as described in the paper, maybe exploit SIMD, threading, etc), since a CPU implementation that meets your performance requirements would be a lot more portable and useful than a solution tied to a particular GPU architecture.
You are right that the Huffman algorithm is sequential, though that's not a bottleneck for high-speed encoding. Please have a look at the following session from GTC 2012. This is a real solution, not just an example.
You can find there some benchmarks for CPU and GPU concerning Huffman encoding and decoding. Huffman encoding on GPU is much faster than on CPU. JPEG decoding on GPU could be much slower in comparison with CPU only in the case when there are no restart markers in JPEG image.
If you need Huffman for something other than JPEG, then you should use a two-pass algorithm: collect statistics in the first pass and do the encoding in the second. Both passes can be parallelized, so it's better to use the GPU than the CPU.
There are a lot of papers saying that the GPU is not suitable for Huffman. It just means that there have been a lot of attempts to solve the problem. The idea for a solution is quite simple: do Huffman encoding on small chunks of data and process those chunks in parallel.
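To make the first (statistics) pass concrete, here is a hedged sketch of a byte-histogram kernel (my own illustration, not the GTC solution's code, and assuming byte symbols): each block counts into a shared-memory histogram and then merges it into a global one with atomics; the per-chunk encoding pass would follow separately.

    // First pass of a two-pass GPU Huffman encoder: count byte frequencies in parallel.
    __global__ void byteHistogram(const unsigned char *data, size_t n, unsigned int *globalHist)
    {
        __shared__ unsigned int hist[256];
        for (int i = threadIdx.x; i < 256; i += blockDim.x)
            hist[i] = 0;
        __syncthreads();

        size_t idx    = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        size_t stride = (size_t)blockDim.x * gridDim.x;
        for (size_t i = idx; i < n; i += stride)
            atomicAdd(&hist[data[i]], 1u);            // per-block counts in fast shared memory
        __syncthreads();

        for (int i = threadIdx.x; i < 256; i += blockDim.x)
            atomicAdd(&globalHist[i], hist[i]);       // merge into the global histogram
    }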

Where does super-linear speedup come from?

In parallel computing, super-linear speedup is theoretically not possible, but in practice we do see such cases. One reason is the cache effect, but I fail to understand what role it plays. Also, there are other things involved, but what are they? In summary:
How are super-linear speedups possible?
I'm a beginner with respect to parallel computing.
Suppose you have an 8 processor machine, each processor has a 1MB cache, and your computation uses 6MB of data.
On 1 processor the computation will be doing a lot of data movement between CPU, cache and RAM, because the 6MB of data does not fit in the 1MB cache. On 8 processors, each processor's share of the data (6MB / 8 = 0.75MB) fits in its 1MB cache, so the computation only has to move data between CPU and cache. This way you can achieve super-linear speedup.
These figures and this analysis have been simplified for exposition for a beginner.
In short, superlinear speedup is achieved when the total amount of work processors do is strictly less than the total work performed by a single processor.
This can happen in three ways:
The original sequential algorithm was really bad; running the parallel version of the algorithm on one processor will usually do away with the superlinear speedup.
The parallel algorithm uses some kind of search, like a random walk: the more processors that are walking, the less distance has to be walked in total before you reach what you are looking for.
Modern processors have faster and slower memories, and a processor will typically try to keep the data you are using in the fast memory. We can safely say your amount of data is larger than the amount of fast memory. If you use n processors, you have n times the amount of fast memory, so more of the data fits in fast memory, which makes it possible to take less time (and thus do less total work) on the same task.