No matter how proficient you are at programming C or Java, you should consider adding multi-threaded programming to your set of skills.
This is NOT something you should try to learn on your own. It is MUCH harder to learn than sequential programming. If you are a Technical Manager, you SHOULD invest in retraining your key staff in multi-threaded programming. You might also monitor the research activities in concurrent programming languages (like those listed above). You can be sure that your competitors will.
This is a quote from this article. I imagine most of us here are quite proficient at teaching ourselves different languages, data structures, algorithms, etc., and I do recognize the mental shift that needs to occur to do parallel programming right.
I reject the idea that one cannot learn parallel programming "right" on their own. So what's the most responsible way to teach oneself parallel programming? What books and other resources are recommended?
Edit: Here are some more details. I would mostly be applying this to scientific computing, but I was looking for general, language-agnostic material and advice, with a healthy dose of practical theory. Imagine you have an excellent developer who loves math and computer science but never took a course on parallel programming. Now imagine he has a deadline for a problem (say, one year), and you have to give him the materials to figure out whether parallelization would be helpful and how to implement it right. What resources would you give him? That is how I (and, I hope, other developers) would like to learn parallelization/multi-threading.
If you pitched up in my workplace and asked that question I'd throw a couple of books at you:
Introduction to Parallel Computing and Parallel Scientific Computing
Your response may well be 'that's not what I want to learn about!', so come back and be a bit more specific in your question, and we'll be able to be a bit more specific in our answers.
But the most 'responsible' way to teach yourself this stuff is the same as the most responsible way to teach yourself any programming stuff: get a problem, get a toolbag, get a deadline, and get cracking.
For Microsoft technology, there is a wealth of information at the MSDN Parallel Computing portal. You could start out with the Getting Started links.
That article is basically an advertisement for training services. You should treat a salesman's opinion of the value of his own products with a degree of circumspection.
I've no idea how you learned everything else you already know about computers, but if that worked for you I'd stick with the same approach for the next thing you want to learn.
I can't recommend any language-/platform-agnostic books; I suspect they'd be very academic anyway. If you're actually on .NET, then Jeff Richter writes a lot of good stuff about threading, and I believe the 3rd edition of his CLR via C# book (earlier editions were excellent) has a great deal about parallel programming.
If you read everything Google finds for the topics below, you'll have a pretty good start, assuming a general IT background. These are not language- or OS-specific:
Peterson's algorithm
atomic test-and-set
critical section
rendezvous
memory barriers
lock-free algorithms
The mathematical background for this is probably Petri nets.
Read Dijkstra.
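As a minimal illustration of the 'critical section' entry in the list above, here is a Python sketch (my own example, not drawn from any of those references): several threads increment a shared counter, and the lock makes the read-modify-write sequence safe.

import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # the critical section: one thread at a time
            counter += 1  # read-modify-write is not atomic on its own

threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000; without the lock, updates can be lost

Remove the lock and the count will often come up short; reasoning about exactly why is what the reading list above prepares you for.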
I'm putting together a simulation environment geared toward multi-agent/swarm robotics simulation. The parts I'll be doing from scratch are the math library, physics engine, and AI engine, implemented in F#/CUDA. I'm looking for advice on how best to implement everything else, i.e. all of the other parts of a game engine. After some initial research, it seemed like OGRE would be my best bet, but I'm still unsure. Basically, I want to concentrate on the physics/AI stuff and have some game engine do everything else for me: rendering, scene graphs, etc.
Something I'm confused about is how my F#/CUDA back-end will interact with the rendering engine/game engine. It seems like the popular engines are set up so you could replace parts of the architecture with your own implementations fairly easily... Which one would be best for me to use?
Since I'm using CUDA and will be rendering results from the GPU, how will this affect the implementation of a rendering engine like OGRE? Surely I'm not stuck just having to use straight OpenGL...
Currently, I'm looking at Ogre, Panda3D, jMonkey, and Gazebo. On the surface, Gazebo seems like it could be exactly what I'm looking for.
I am also considering Python and Lua, though I'm leaning towards the latter.
I'd like this "simulation environment" to be easily converted to (or just used as) a real game engine. I know all of the elements would pretty much be there already; I'm just stating that in case it helps with giving me advice.
Also, I would really like this thing to be able to run on the CUDA-capable Tegra 4s that are supposed to come out this quarter (last I checked). That may or may not be something that can accurately be determined at this point, but you'll know better than I do... e.g. if I want the possibility of future Tegra/Android use, should I do everything in jMonkey? Would it matter?
...Overall, I'm most interested in advice on what architecture configuration would work well with an F#/CUDA physics and AI engine.
By asking for the 'relative popularity' of different languages, rather than asking 'what is the best language?' or 'what is your favorite language', I hope to make this somewhat objective.
I want a language for machine learning / matrices, that is:
open-source-friendly (cf. MATLAB)
fast for inner loops (cf. Python, MATLAB)
fast for matrices (most languages are about the same here, since they can usually call BLAS)
has terse, easy-to-read syntax (cf. Java)
I've currently settled on Java, since it's average at everything but really poor at nothing. Still, I can't help feeling that Java is more and more dated (e.g. no operator overloading, and the borked generics), so I'm wondering: what is the feeling on the relative popularity of different languages for machine learning?
I think people mostly use C++, MATLAB, and Python, but I'm curious whether there's some language I've missed that everyone's busy using and I just haven't realized it yet.
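To make the BLAS point concrete, here is a rough Python sketch (numpy is just one common way to get BLAS-backed matrices; the timings are only illustrative):

import time
import numpy as np

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b  # delegated to an optimized BLAS routine
t_blas = time.perf_counter() - t0

t0 = time.perf_counter()
c2 = [[sum(a[i, k] * b[k, j] for k in range(n))
       for j in range(n)] for i in range(n)]  # same product, pure-Python loops
t_loop = time.perf_counter() - t0

print("BLAS: %.4fs, pure Python: %.4fs" % (t_blas, t_loop))

The gap of a few orders of magnitude is why "fast for matrices" holds for most languages, while "fast for inner loops" does not.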
When I worked on a machine learning project with a friend, I picked up R, which is open source, designed for matrix math, and has extensive library support. It's certainly terser than Java, and I found the syntax pleasant, but that's a subjective judgement.
According to Rexer Analytics, R is the most popular data mining tool, being used by almost half of all of their survey respondents.
(Information on R is hard to search for because of the one-letter name, so there is a dedicated Google front end for finding R-related material.)
I am preparing my first course on Numerical Methods for Electrical Engineering and would like to collect feedback from colleagues with experience in the subject about which language or environment has produced the best results for student learning. MATLAB/Octave? Python? C/C++? Would R be worth considering? Sorry if the question is outside the scope of the site.
Thanks for the feedback.
Octave
Gentle learning curve: it's easier to start using than C, Fortran, or even Python, which lets you focus your curriculum on the concepts and not the minutiae.
Uses an interpreted programming model: students receive feedback quickly, with no compile/link step. Rapid feedback also encourages students to explore concepts freely.
It works very well with the command line interface. Simple is good.
Runs on many operating systems.
Many, many scripts available freely.
Large community that supports MATLAB and Octave. Help is never far away.
Installation is very simple.
Many high-level numerical functions are built in, so to speak. You can choose to let your students use them or not; it will depend on the curriculum.
Octave is free and works very well.
The only thing I miss is IDE integration with a debugger.
Check with the other faculty. They might have an opinion about what tool sets are appropriate for the class.
Environments with a read-eval-print loop (REPL) are far, far better than anything that requires a compiler. C and C++ (and Java, for that matter) impose intellectual overheads that may not be helpful.
In all cases (MATLAB, Python, R), the basic rules of floating-point arithmetic are absolutely essential.
It seems like (almost) every week someone posts yet another question here on why
>>> 555*(1/.555)
999.9999999999999
happens in Python (or Java or C).
Please don't allow your students to ask this question here.
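For reference, the answer is that 0.555 has no exact binary floating-point representation, so the rounding error surfaces in the product. A minimal Python sketch of the standard remedies, tolerant comparison or exact rational arithmetic:

import math
from fractions import Fraction

x = 555 * (1 / 0.555)
print(x)                        # 999.9999999999999
print(math.isclose(x, 1000.0))  # True: compare floats with a tolerance
print(Fraction(555) * (1 / Fraction("0.555")))  # 1000 exactly, as a rational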
I want to make really good websites.
Pet projects really do help in exploring blind spots and consciously mastering the skills you want to acquire.
I'm happy with the process and the result, but what do I do next with this product?
Users start to use it, and I don't want to support them, because I want to continue building my skills, not end-user products. Hosting for all these web apps costs some money, too.
Should I just work on a project for as long as I find it interesting and technically challenging, then junk it and move on to another one? Or release everything as open source and not worry about support?
The reason I don't want to do support is that I want to specialize. The field is already too wide, and I don't want to widen it even further.
I've heard the idea that you shouldn't care much about what you work on during your early years in development, because pretty much everything you produce will be junk anyway, so you should just try more things. Is that the path I should follow?
What is more effective?
You don't say how technical your websites will be - are you looking to build pure HTML, more dynamic DHTML or web applications with server scripts?
Regardless, you should look at taking on pet projects that challenge you in a variety of areas. As you get more experienced with web projects, you'll find that you end up doing many tasks over and over for different projects.
A good start would be to take on pet projects that tackle 'common' problems (layouts, styling, user logins/sessions, persisting data, etc.) and then look to abstract your work into a series of components that you can reuse in future projects. This way you will build up a library of reusable 'widgets', which means you don't have to scrap everything and start again each time.
As you get more experienced, set yourself tougher challenges, and before long you'll have a considerable arsenal of sample code; if it's architected well, it will be mostly reusable, and you'll expand your experience at the same time.
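As a trivial, hypothetical illustration of such a reusable 'widget' (the names and structure here are invented for the example), a Python layout helper that wraps page-specific content in the boilerplate every page shares:

def render_page(title, body_html):
    # Wrap page-specific content in the HTML shell shared by every page.
    return ("<!DOCTYPE html>\n"
            "<html>\n"
            "  <head><title>%s</title></head>\n"
            "  <body>%s</body>\n"
            "</html>" % (title, body_html))

print(render_page("Home", "<h1>Hello</h1>"))

Once something like this has been vetted in one project, it goes into your personal library and every later project starts a step ahead.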
Good luck!
I wouldn't say your early years are junk. Leaving aside the obvious intangible gains (experience, insight, pattern recognition, etc.), one tangible gain to think about is the development and organization of your own method libraries. My early years were in ASP, so having my own set of includes was invaluable to my overall success and efficiency. I'd carry over a set of utilities and database methods that I had previously developed, so new projects got easier and easier because the bare-bones stuff had already been vetted.
It may not be as important as it was 10 years ago, because class libraries are getting so robust, but you'll still find that as you develop new projects it is very useful to have utilities that you've already developed and organized into your very own class libraries.
I was looking around SO to find some exercises or interesting problems to work through when learning a new language. Most of the time, learning a language just by reading a book does not work, even when the book builds an application from start to finish.
Besides Code Kata, 15 Exercises to know..., and Project Euler, what are some other resources?
Also, what if the language in question is mainly used for web development, or that is the developer's main reason for learning it? In those cases, I doubt the problems on Project Euler will help. Is there a set of functionality that should be implemented in a web app for a developer to feel confident about his skills in that language/framework?
The main and best resource is tasks that you care about! Come on, don't you have a backlog of "N cool things I wish I could be doing (to make my life, or somebody else's, better)"? I've had dozens and dozens of things on such backlogs ever since... well, as far back as I can recall. So pick one and do it with the new language (or framework, or whatever); the fact that you care about the result, rather than it being an artificial task set by somebody else, makes all the difference!
When learning a new language I usually build something that I've previously built in another language.
Try to rewrite some simple tools you wrote in another language, if possible. That way you already know what the tool needs to do, and you'll see the differences between the two languages.
FizzBuzz (this parenthetical is just here to meet the length requirement).
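For anyone who hasn't met it, a minimal Python rendition: print the numbers 1 through 100, but print "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.

for i in range(1, 101):
    if i % 15 == 0:    # divisible by both 3 and 5
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)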
Create something that the language you are learning is good at. For instance, if you're working with Ruby on Rails, make a data driven web application. If you're working with Perl, write something that does advanced text parsing. Most languages were created to solve a certain set of problems. Focusing on those problems allows you to learn the differentiating features of the language you are studying.
There are many puzzles and challenges you can take part in, such as the Facebook Career Puzzles. Those puzzles are aimed at people who want to work for Facebook, but there is no reason you can't try them even if you aren't interested in a job. They range from very simple to complex.
Programming Puzzles for Interviews (you could write programs to solve these)
Programming Puzzles hosted by companies like this one are also fun.
I would say create something useful to help yourself, or try tackling some puzzles. In addition to learning new ways to tackle problems, you'll learn the language better.