As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
I would like to ask a somewhat theoretical question about the compute capabilities of Nvidia cards.
From my relatively short experience, I have noticed that cards with compute capability 2.0 can perform better than 1.3 cards, though that really depends on the nature of the kernel and the occupancy achieved on each SM.
But since everything has its advantages and disadvantages, what are the disadvantages of a 2.0 card and the advantages of a 1.3 card?
How can a 1.3 card run a certain kernel faster than a 2.0 card, and what characteristics would such a kernel need to have?
Personal experience is very welcome, and an explanation grounded in the architecture of each card would be even better.
Regards
In general, the higher the compute capability, the more features the GPU supports.
Check out the CUDA article on Wikipedia, which has a feature table per compute capability.
Of course, if you write bad code for a GPU with a CC of 3.5 and great code for a GPU with a CC of 2.0, the 2.0 GPU can outperform the 3.5 GPU.
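If you want to see what a particular card supports, here's a minimal sketch (using the CUDA runtime's cudaGetDeviceProperties; nothing card-specific is assumed) that prints the compute capability and a few limits that differ between 1.3 and 2.0 parts:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            // Compute capability is reported as major.minor (e.g. 1.3 or 2.0).
            printf("Device %d: %s, compute capability %d.%d\n",
                   dev, prop.name, prop.major, prop.minor);
            printf("  SMs: %d, shared memory/block: %zu bytes, registers/block: %d\n",
                   prop.multiProcessorCount, prop.sharedMemPerBlock,
                   prop.regsPerBlock);
        }
        return 0;
    }

Compile with nvcc and run it; a 1.3 card reports 16 KB of shared memory per block, for instance, while a 2.0 card reports 48 KB, which alone can change which kernel configuration works best.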
Closed 10 years ago.
By asking for the 'relative popularity' of different languages, rather than asking 'what is the best language?' or 'what is your favorite language', I hope to make this somewhat objective.
I want a language for machine learning / matrices that is:
open-source-friendly (cf. MATLAB)
fast for inner loops (cf. Python, MATLAB)
fast for matrices (most languages are about the same here, since they can usually use BLAS)
has terse, easy-to-read syntax (cf. Java)
I've currently settled on Java, since it's average at everything but really poor at nothing. Still, I can't help feeling that Java is more and more dated (e.g. no operator overloading, and the borked generics), so I'm wondering what the feeling is on the relative popularity of different languages for machine learning.
I think mostly people use C++, MATLAB and Python, but I'm just curious whether there's some language I've missed that everyone's busy using and I haven't realized it yet.
When I worked on a machine learning project with a friend, I picked up R, which is open source, designed for matrix math, and has extensive library support. It's certainly terser than Java, and I found the syntax pleasant, but that's a subjective judgement.
According to Rexer Analytics, R is the most popular data mining tool, being used by almost half of all of their survey respondents.
(Information on R is hard to search for, so there is a dedicated Google front end for finding R-related material.)
Closed 10 years ago.
JavaScript is nice, but for better performance, why don't web browsers (IE, Chrome, Firefox, Safari) add a Lua VM, or make a Lua VM part of the web browser standard?
Any comments are welcome.
Because today's JIT compilers for JavaScript are just as fast as, if not faster than, JIT engines for Lua.
The web experimented with different client-scripting languages in the mid-1990s, when we had LiveScript (an early JavaScript), VBScript (thank you, Microsoft), as well as Tcl. The web decided it didn't like that, and we settled on a single language (JavaScript, now EcmaScript).
Lua offers no real advantages and introduces a massive workload: the DOM API would need to be implemented, for example, and Lua has different semantics to EcmaScript (with respect to typing and how functions work, amongst other things), so the majority of web developers would need to relearn their trade.
There just isn't a business case in it.
Closed 11 years ago.
Note: sorry, this is not exactly a programming question; please migrate it if there is a more appropriate Stack Exchange site (I didn't see one; it's not theoretical CS).
I'm looking for less CUDA-specific terms for certain GPU-programming related concepts. OpenCL is somewhat helpful. I'm looking for "parallelism-theory" / research paper words more than practical keywords for a new programming language. Please feel free to post additions and corrections.
"warp"
I usually equate this to SIMD-width.
"block"
alternatives
"group" (OpenCL).
"thread-block" -- want something shorter
"tile"
"syncthreads"
It seems "barrier" is the more general word, used in multicore CPU programming I think, and OpenCL. Is there anything else?
"shared memory"
alternatives
"local memory" (OpenCL).
"tile/block cache"?
"kernel"
alternatives
"CTA / Cooperative Thread Array" (OpenCL). way too much of a mouthful, dunno what it means.
"GPU program" -- would be difficult to distinguish between kernel invocations.
"device computation"?
There aren't really exact enough technology-neutral terms for the detailed specifics of CUDA and OpenCL, and if you used more generic terms such as "shared memory" or "cache" you wouldn't be making clear precisely what you meant.
I think you might have to stick to the terms from one technology (perhaps putting the other in brackets), or use "his/her"-style paired terms, and add extra explanation if a term doesn't have a corresponding use in the other.
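To make the term mapping concrete, here's a small illustrative CUDA kernel (a made-up tile-scaling example, not from any particular source) with the rough OpenCL equivalents noted in the comments:

    #include <cuda_runtime.h>

    // "kernel" is the same word in CUDA and OpenCL; a "block" (CUDA) maps to a
    // "work-group" (OpenCL) and a thread to a "work-item". A "warp" (the SIMD
    // width, 32 on NVIDIA hardware) has no OpenCL keyword; it is a device detail.
    __global__ void tileScale(const float* in, float* out, int n, float s) {
        // "shared memory" (CUDA) ~ "local memory" (OpenCL): a per-block scratchpad.
        // Launch with blockDim.x <= 256 so the tile fits.
        __shared__ float tile[256];

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) tile[threadIdx.x] = in[i];

        // __syncthreads() (CUDA) ~ barrier(CLK_LOCAL_MEM_FENCE) (OpenCL):
        // a barrier across the block / work-group.
        __syncthreads();

        if (i < n) out[i] = tile[threadIdx.x] * s;
    }

    int main() {
        const int n = 1024;
        float *d_in, *d_out;
        cudaMalloc(&d_in, n * sizeof(float));
        cudaMalloc(&d_out, n * sizeof(float));

        // Launching a grid of blocks (CUDA) ~ enqueueing an NDRange of
        // work-groups (OpenCL). Buffers are left uninitialised; this is only
        // meant to show the terminology.
        tileScale<<<n / 256, 256>>>(d_in, d_out, n, 2.0f);
        cudaDeviceSynchronize();

        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }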
Closed 11 years ago.
I first got into GPGPU with my (now aging) NVIDIA 9800GT 512MB via CUDA. It seems these days my GPU just doesn't cut it.
I'm specifically interested in OpenCL, as opposed to CUDA or StreamSDK, though some info on whether either of these are still worth pursuing would be nice.
My budget is around 150 GBP plus/minus 50 GBP. I'm a little out of the loop on which GPUs are best for scientific computing (specifically fluid simulation and 3D medical image processing).
A comparison of ATI vs. NVIDIA may also be helpful, if they are really so disparate.
[I'd also be interested to hear any suggestions on games that make use of GPGPU capabilities, but that's a minor issue next to the potential for scientific computing.]
I'm also a little lost when it comes to evaluating the pros and cons of memory speed vs. clock speed vs. memory capacity, etc., so any info on these more technical aspects would be most appreciated.
Cheers.
If you were going purely off OpenCL being the requirement, I would say go with ATI, because they have released OpenCL 1.1 drivers, whereas nVidia put out beta drivers almost as soon as the spec was published but has not updated them since, and those have a couple of bugs, from what I've read on the nVidia OpenCL forums.
Personally, I chose nVidia because it gives me all the options. You really ought to check out CUDA; it's a far more productive approach to leveraging the GPU and CPU with a common language. Down the road, Microsoft's AMP language extensions for C++ are going to provide the same sort of approach as CUDA in a more platform-agnostic way, though, and I'm sure they will be more widely adopted by the community at that point than CUDA.
Another reason to choose nVidia is that that's what the HPC system builders have been building systems with since nVidia made a huge push for GPGPU computing, whereas there is less of that backing on the AMD/ATI side. There really is no answer to the Tesla lineup from that camp. Even Amazon EC2 offers a GPU compute cluster based on Tesla. So if you're looking for reach and scale beyond the desktop, I think nVidia is the better bet.
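On the question of weighing memory speed against clock speed: a rough first-order yardstick is theoretical memory bandwidth, i.e. memory clock x 2 (for DDR) x bus width in bytes. A minimal sketch that computes it from what the CUDA runtime reports, assuming an NVIDIA card and a toolkit recent enough that cudaDeviceProp exposes memoryClockRate and memoryBusWidth:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);  // first GPU in the system

        // memoryClockRate is reported in kHz, memoryBusWidth in bits.
        // Theoretical bandwidth (GB/s) = 2 (DDR) * clock (Hz) * bus width (bytes) / 1e9.
        double gbps = 2.0 * (prop.memoryClockRate * 1000.0)
                          * (prop.memoryBusWidth / 8.0) / 1e9;

        printf("%s\n", prop.name);
        printf("  core clock:            %.0f MHz\n", prop.clockRate / 1000.0);
        printf("  global memory:         %zu MB\n", prop.totalGlobalMem >> 20);
        printf("  theoretical bandwidth: ~%.1f GB/s\n", gbps);
        return 0;
    }

For bandwidth-bound work like fluid simulation and 3D image processing, that bandwidth figure usually matters more than the core clock, with memory capacity setting the ceiling on problem size.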
Closed 10 years ago.
I have some knowledge of C/C++ programming and want to learn CUDA. I'm also on a Mac. So what is the best way to learn CUDA?
Download the dev kit, take one of the examples, and modify it. Then write something from scratch.
You can consult these resources:
CUDA SDK Code Samples
CUDA by Example: An Introduction to General-Purpose GPU Programming
NVIDIA
Think up a numerical problem and try to implement it. Make sure that you have an NVIDIA card first. :) Download the SDK from the NVIDIA web site. Read the "CUDA Programming Guide"; it's less than 200 pages long and sufficiently well written that you should be able to get through it in one pass. Then pick a sufficiently simple sample and start modifying/rewriting it.
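A typical first program to write from scratch and then tinker with is a plain vector add; here's a minimal sketch (my own, not one of the SDK samples):

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each thread adds one element: the classic "hello world" of CUDA.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Host buffers.
        float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device buffers and host-to-device copies.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // One thread per element, 256 threads per block.
        const int threads = 256, blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f (expect 3.0)\n", hc[0]);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Once that runs, changing the block size, the data size, or the arithmetic in the kernel is a quick way to get a feel for how launches and memory transfers behave.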