By asking for the 'relative popularity' of different languages, rather than asking 'what is the best language?' or 'what is your favorite language?', I hope to make this somewhat objective.
I want a language for machine learning / matrices, that is:
open-source friendly (cf. MATLAB)
fast for inner loops (cf. Python, MATLAB)
fast for matrices (most languages are about the same here, since they can usually call into BLAS)
has terse, easy-to-read syntax (cf. Java)
I've currently settled on Java, since it's average at everything but really poor at nothing. Still, I can't help feeling that Java looks more and more dated (e.g. no operator overloading, and the borked generics), so I'm wondering what the feeling is on the relative popularity of different languages for machine learning.
I think people mostly use C++, MATLAB and Python, but I'm curious whether there's some language I've missed that everyone's busy using and I just didn't realize yet.
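For reference, here is roughly the level of terseness I have in mind, sketched in Python with NumPy on placeholder random data (just an illustration of the style of code in question, not an endorsement of any one language):
>>> import numpy as np
>>> A = np.random.rand(1000, 1000)      # placeholder data
>>> b = np.random.rand(1000)
>>> x = np.linalg.solve(A, b)           # dense solve, LAPACK/BLAS under the hood
>>> np.allclose(A @ x, b)               # one-line residual check
True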
When I worked on a machine learning project with a friend, I picked up R, which is open source, designed for matrix math, and has extensive library support. It's certainly terser than Java, and I found the syntax pleasant, but that's a subjective judgement.
According to Rexer Analytics, R is the most popular data mining tool, used by almost half of their survey respondents.
(Information on R is hard to search for, so they have a Google frontend for searching for information about it.)
I'm putting together a simulation environment geared towards multi-agent/swarm robotic simulation. The parts I'll be doing from scratch are the math library, physics engine, and AI engine; these will be implemented in F#/CUDA. I'm looking for advice on how best to implement everything else, i.e. all of the other parts of a game engine. After a bit of research at the beginning, it seemed like OGRE would be my best bet, but I'm still unsure. Basically, I want to concentrate on the physics/AI stuff and have some game engine do everything else for me: rendering, scene graphs, etc.
Something I'm confused about is how my F#/CUDA back-end will interact with the rendering engine/game engine. It seems like the popular engines are set up so you could replace parts of the architecture with your own implementations fairly easily... Which one would be best for me to use?
Since I'm using CUDA and will be rendering results from the GPU, how will this affect the implementation of a rendering engine like OGRE? I know I can't be stuck just having to use straight OpenGL...
Currently, I'm looking at Ogre, Panda3D, jMonkey, and Gazebo. On the surface, Gazebo seems like it could be exactly what I'm looking for.
I am also considering Python and Lua, though I'm leaning towards the latter.
I'd like this "simulation environment" to be easily converted to (or just used as) a real game engine. I know all of the elements would pretty much be there already; I'm just stating that in case it helps with giving me advice.
Also, I would really like for this thing to be able to run on the CUDA-capable Tegra 4s that are supposed to come out this quarter (last I checked). This may or may not be something that can accurately be determined at this point, but you guys will know better than me... i.e. if I want the possibility of future Tegra/Android use, should I do everything in jMonkey? Would it matter?
Overall, I'm most interested in advice on what architecture configuration would work well with an F#/CUDA physics & AI engine.
I am new to the world of MS Access. I have been working with it for a few days now. What I have done mostly is look for solutions on the web, do some relevant research and get going.
I was wondering if anyone could share:
What is the best way of mastering MS Access?
Any suggestions?
Although it's not the method du jour in a web-based world, my recommendation is NOT to look on the web to try to learn a piece of software because you'll be getting a scattergun effect. You'll be getting a bit here, a bit there, some good techniques, some really poor techniques (and at your skill level no way to know the difference) but more importantly... you won't be getting the required CONTEXT to help you understand the various parts of the product and how they connect into a whole.
I would therefore get a decent introductory-level textbook, which will normally lead you through the basic concepts, building on each one so that you grow into the product as you go. I don't do much with Access any more, so I can't recommend a specific book, but I have found that O'Reilly (http://search.oreilly.com/?q=Access&x=0&y=0) has a range of good books in both paper and digital formats.
If you do want to learn from the web, I'd recommend a structured training program such as the ones offered by Lynda.com or Total Training. I've subscribed to both at various times, and although I'm currently a Lynda member, TT's courses are usually quite good as well. There are probably some others, but I can't give any first-hand recommendations on those.
Good luck with your studies.
Note: sorry, this is not exactly a programming question; please migrate it if there is a more appropriate Stack Exchange site (I didn't see any; it's not theoretical CS).
I'm looking for less CUDA-specific terms for certain GPU-programming concepts. OpenCL is somewhat helpful. I'm looking for "parallelism theory" / research-paper words more than practical keywords for a new programming language. Please feel free to post additions and corrections.
"warp"
I usually equate this to SIMD-width.
"block"
alternatives
"group" (OpenCL).
"thread-block" -- want something shorter
"tile"
"syncthreads"
It seems "barrier" is the more general word, used in multicore CPU programming I think, and OpenCL. Is there anything else?
"shared memory"
alternatives
"local memory" (OpenCL).
"tile/block cache"?
"kernel"
alternatives
"CTA / Cooperative Thread Array" (OpenCL). way too much of a mouthful, dunno what it means.
"GPU program" -- would be difficult to distinguish between kernel invocations.
"device computation"?
There aren't really terms that are exact enough and technology-neutral for the detailed specifics of CUDA and OpenCL, and if you used more generic terms such as "shared memory" or "cache" you wouldn't be making clear precisely what you meant.
I think you might have to stick to the terms from one technology (perhaps putting the other in brackets), or use "his/her"-type language and add extra explanation when a term doesn't have a corresponding use in the other.
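For example, here is what that convention might look like in practice -- a rough sketch using Numba's CUDA support in Python (my choice of tool, not anything from the question or answer), sticking to CUDA vocabulary and putting the OpenCL equivalents in the comments:
from numba import cuda, float32

@cuda.jit
def block_sum(x, out):
    # "shared memory" (CUDA) == "local memory" (OpenCL): per-block scratch space
    tmp = cuda.shared.array(128, float32)   # assumes a block / work-group size of 128
    tid = cuda.threadIdx.x                  # thread (CUDA) == work-item (OpenCL)
    i = cuda.grid(1)                        # index into the grid (CUDA) == NDRange (OpenCL)
    tmp[tid] = x[i] if i < x.shape[0] else 0.0
    cuda.syncthreads()                      # syncthreads (CUDA) == barrier (OpenCL)
    stride = 64
    while stride > 0:                       # tree reduction within one block / work-group
        if tid < stride:
            tmp[tid] += tmp[tid + stride]
        cuda.syncthreads()
        stride //= 2
    if tid == 0:
        out[cuda.blockIdx.x] = tmp[0]       # block index (CUDA) == work-group id (OpenCL)

# launching the kernel: block_sum[num_blocks, 128](d_x, d_out)
# "kernel launch" (CUDA) == "enqueue an NDRange kernel" (OpenCL)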
I am preparing my first course on Numerical Methods for Electrical Engineering and would like to collect feedback from colleagues with experience in the subject about which language or environment has produced better results for student learning. MATLAB/Octave? Python? C/C++? Would it be interesting to use R? Sorry if the question is outside the scope of the site.
Thanks for the feedback.
Octave
Gentle learning curve -- easier to start using than C, FORTRAN or even Python. It allows you to focus your curriculum on the concepts and not the minutiae.
Uses an interpreted programming model -- students receive feedback quickly, with no compile/link step. Rapid feedback also encourages students to explore concepts freely.
It works very well from the command-line interface. Simple is good.
Runs on many operating systems.
Many, many scripts available freely.
Large community that supports MATLAB and Octave. Help is never far away.
Installation is very simple.
Many high-level numerical functions are built in, so to speak. You can choose to let your students use them, or not; it will depend on the curriculum.
Octave is free and works very well.
The only thing I miss is IDE integration with a debugger.
Check with the other faculty. They might have an opinion about what tool sets are appropriate for the class.
Environments with a read-eval-print loop (REPL) are far, far better than anything which requires a compiler. C and C++ (and Java, for that matter) impose some intellectual overheads that may not be helpful.
In all cases (MATLAB, Python, R) the basic rules of floating-point arithmetic are absolutely essential.
It seems like (almost) every week someone posts yet another question here on why
>>> 555*(1/.555)
999.9999999999999
happens in Python (or Java or C).
Please don't allow your students to ask this question here.
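If it helps when making that point, a couple of companion one-liners (Python again, purely as an illustration) show both the classic symptom and the usual remedy of comparing with a tolerance rather than with ==:
>>> 0.1 + 0.2 == 0.3          # none of these values is exactly representable in binary
False
>>> 0.1 + 0.2
0.30000000000000004
>>> import math
>>> math.isclose(555 * (1 / .555), 1000)   # compare with a tolerance, not with ==
True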
How much better would commercial OCR software be compared to the stuff that's available online for free?
More specifically: reading text in pictures (things like book covers, etc.).
I work with OCR quite a lot and can definitely vouch that the commercial offerings are much better than what you can find out there for free. Yes, you can make a free one 'work', but it will take a lot of effort for sub-optimal results.
I recommend finding a product that uses the ABBYY FineReader engine: it does a great job with little configuration.
You may want to consider whether you need an SDK provided by the OCR supplier or an end-user application. The SDK will provide position details, etc. of what it finds and offer a lot more in-depth control, but will be more expensive. The end-user package will basically just read everything it finds; you may be able to set it to automatic or control it in a rudimentary way, and it might be good enough for what you're trying to do, and may be a lot cheaper.
Get a trial version and give it a go!
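To make the SDK-versus-application distinction concrete, here is a rough sketch using the free Tesseract engine through the pytesseract wrapper (my own example, not one of the commercial products above, and "cover.jpg" is just a placeholder): the application style returns a blob of text, while the SDK style exposes per-word positions and confidences.
import pytesseract
from PIL import Image

img = Image.open("cover.jpg")                 # placeholder input image
# "end-user application" style: just the recognised text
text = pytesseract.image_to_string(img)
# "SDK" style: per-word bounding boxes and confidence scores
data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
for word, conf, left, top in zip(data["text"], data["conf"], data["left"], data["top"]):
    if word.strip():
        print(word, conf, (left, top))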
Google's OCRopus is free, open source, and one of the best.