What areas of specialization within programming would you recommend to a beginner [closed] - language-agnostic

I am a student studying software development, and I feel programming, in general, is too broad a subject to try to know everything about. To be proficient, you have to decide which areas to focus your learning and understanding on. Certain skill sets synergize with each other, like data-driven web development and SQL experience. However, all the Win32 API experience in the world may not directly apply to Linux development. This leads me to believe that, as a beginning programmer, I should start deciding where I want to specialize once I have a general understanding of the basic principles of software development.
This is a multi-part question really:
What are the common specializations within computer programming and software development?
Which of these specializations have more long-term value, both as a foundation for other specializations and/or as marketable skills?
Which skill sets complement each other?
Are there any areas of specialization that hinder your ability to develop other areas of specialization?

Ben, almost all seasoned programmers are still students of programming. You never stop learning as a developer. But if you are just starting your career, specialization should be the least of your worries. No API, framework, or skill set will guarantee you long-term relevance in this field. Technology changes constantly, so you should be versatile and flexible enough to learn anything. The knowledge you acquire on one platform/API/framework doesn't die off; you can apply those skills to the next great platform/API/framework.
That being said, stop worrying about the future and concentrate on the basics. Data structures, algorithm analysis and design, compiler design, and operating system design are the bare minimum you need. Furthermore, you should be willing to go back and reread the books in those fields at any point in your career. That's all that is required. Good luck.
Sorry if I sounded like a big-shot advisor, but that's what I think. :-)

Not to directly reject your premise, but I actually think being a generalist is a good position in programming. You will certainly develop expertise in specific areas, but that is likely to be a product of either personal interest or work necessity. Over time, the stuff you are able to transfer across languages and problem domains is at the heart of what makes good programmers.

I think the more important question is: What areas of specialization are you most interested in?
Once you know, begin learning in that area!

I would think the greatest skill of all is adapting to the times, because if your employer can see this potential in you, they would be wise to hold on tightly.
That said, I would advise you dive into the area YOU would enjoy. Learning is driven by enthusiasm.
Since my current employer is an internet provider, I've found networking knowledge particularly helpful. But someday I'd like to play with 3D graphics (not necessarily games).

Go as deep as you can starting off in one environment: Win32, .NET, Java, Objective-C... whatever.
It is important to build a deep understanding of how X works, so that you can translate the same concepts into other languages or platforms/environments if you so desire.
"Are there any areas of specialization that hinder your ability to develop other areas of specialization?" Sort of, but nothing permanent, I think.
Since I am relatively green myself (less than 4 years), I come from a heavily OOP mindset. I've rarely stepped outside .NET, so I had a hard time on one job when I came into contact with embedded code, with embedded programmers fearing object creation and the performance cost of inheritance. I had to learn the environment they were coming from: seriously low memory and slow clock speeds. Those are times to grow, and I had an easier time of it because I understood my own area pretty well.
I will say that if you pick something to specialize in for marketability and money, you will probably burn out fast. If you do start to specialize, pick something you enjoy. I love GUI programming and hate server-side stuff; my buddy is the opposite, but we both love our jobs. If he had to do my job, and I his, we would both go insane out of boredom.

As a student I'd recommend forgetting about what you're programming and focusing on the software process itself. Understand how to analyse a problem and ask the right questions; learn every design pattern you can and actually apply them all to gain a real understanding and appreciation of object-oriented design; write tests and then code only as much as you need to in order to make the tests pass. I think the best way to really learn is to just code as much as you can - the language and the domain aren't important, browse sourceforge and freshmeat for any interesting-sounding projects and get involved. What's important is understanding the fundamentals of software engineering.
And yes, this includes C. Or Assembler. This is the easiest way to get a good understanding of how your computer works and what your high-level code is actually doing.
Finally, never stop learning. Service-oriented architecture, inversion of control, domain-specific languages, and business process management are all showing huge benefits, so they're important to be aware of. But by the time you finish studying and join the workforce, who knows what the next big thing will be?


Is OOP abused in universities? [closed]

I started college two years ago, and since then I keep hearing "design your classes first". I really do ask myself sometimes: should my solution be a bunch of objects in the first place? Some say that you don't see the benefits because your codebase is very small in university projects. The project-size excuse just doesn't go down well with me. If a solution works well for a project, I believe it should also be the right one for the macro version of that project.
I am not saying OOP is bad; I just feel it is abused in classrooms, where students like me are told day and night that OOP is the right way.
IMHO, the proper answer shouldn't come from a professor; I prefer to hear it from real engineers in the field.
Is OOP the right approach always?
When is OOP the best approach?
When is OOP a bad approach?
This is a very general question. I am not asking for definite answers, just some real design experience from the field.
I don't care about performance. I am asking about design. I know it is engineering in real life.
==================================================================================
I am thankful for all the contributions. I chose Nosredna's answer because she addressed my questions in general and convinced me that I was wrong about the following:
If a solution works well for a project, I believe it should also be the right one for the macro version of that project.
The professors have the disadvantage that they can't put you on huge, nasty programs that go on for years, being worked on by many different programmers. They have to use rather unconvincing toy examples and try to trick you into seeing the bigger picture.
Essentially, they have to scare you into believing that when an HO gauge model train hits you, it'll tear your leg clean off. Only the most convincing profs can do it.
"If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project."
That's where I disagree. A small project fits into your brain. The large version of it might not. To me, the benefit of OO is hiding enough of the details so that the big picture can still be crammed into my head. If you lack OO, you can still manage, but it means finding other ways to hide the complexity.
Keep your eye on the real goal--producing reliable code. OO works well in large programs because it helps you manage complexity. It also can aid in reusability.
But OO isn't the goal. Good code is the goal. If a procedural approach works and never gets complex, you win!
OOP is a real world computer concept that the university would be derelict to leave out of the curriculum. When you apply for jobs, you will be expected to be conversant in it.
That being said, pace jalf, OOP was primarily designed as a way to manage complexity. University projects written by one or two students on homework time are not a realistic setting for large projects like this, so the examples feel (and are) toy examples.
Also, it is important to realize that not everyone really sees OOP the same way. Some see it about encapsulation, and make huge classes that are very complex, but hide their state from any outside caller. Others want to make sure that a given object is only responsible for doing one thing and make a lot of small classes. Some seek an object model that closely mirrors real world abstractions that the program is trying to relate to, others see the object model as about how to organize the technical architecture of the problem, rather than the real world business model. There is no one true way with OOP, but at its core it was introduced as a way of managing complexity and keeping larger programs more maintainable over time.
OOP is the right approach when your data can be well structured into objects.
For instance, for an embedded device that's processing an incoming stream of bytes from a sensor, there might not be much that can be clearly objectified.
Also in cases where ABSOLUTE control over performance is critical (when every cycle counts), an OOP approach can introduce costs that might be nontrivial to compute.
In the real world, most often, your problem can be VERY well described in terms of objects, although the law of leaky abstractions must not be forgotten!
Industry generally resolves, eventually, for the most part, to using the right tool for the job, and you can see OOP in many many places. Exceptions are often made for high-performance and low-level. Of course, there are no hard and fast rules.
You can hammer in a screw if you stick at it long enough...
My 5 cents:
OOP is just one instance of a larger pattern: dealing with complexity by breaking down a big problem into smaller ones. Our feeble minds are limited to a small number of ideas they can handle at any given time. Even a moderately sized commercial application has more moving parts than most folks can fully maintain a complete mental picture of at a time. Some of the more successful design paradigms in software engineering capitalize on the notion of dealing with complexity. Whether it's breaking your architecture into layers, your program into modules, doing a functional breakdown of actions, using pre-built components, leveraging independent web services, or identifying objects and classes in your problem and solution spaces. Those are all tools for taming the beast that is complexity.
OOP has been particularly successful in several classes of problems. It works well when you can think about the problem in terms of "things" and the interactions between them. It works quite well when you're dealing with data, with user interfaces, or building general purpose libraries. The prevalence of these classes of apps helped make OOP ubiquitous. Other classes of problems call for other or additional tools. Operating systems distinguish kernel and user spaces, and isolate processes in part to avoid the complexity creep. Functional programming keeps data immutable to avoid the mesh of dependencies that occur with multithreading. Neither is your classic OOP design and yet they are crucial and successful in their own domains.
In your career, you are likely to face problems and systems that are larger than you could tackle entirely on your own. Your teachers are not only trying to equip you with the present tools of the trade; they are trying to convey that there are patterns and tools available for you to use when you are attempting to model real-world problems. It's in your best interest to accumulate a collection of tools for your toolbox and choose the right tool(s) for the job. OOP is a powerful tool to have, but by far not the only one.
No...OOP is not always the best approach.
(A true) OOP design is the best approach when your problem can best be modeled as a set of objects that can accomplish your goals by communicating/using one another.
Good question...but I'm guessing Scientific/Analytic applications are probably the best example. The majority of their problems can best be approached by functional programming rather than object oriented programming.
...that being said, let the flaming begin. I'm sure there are holes and I'd love to learn why.
Is OOP the right approach always?
Nope.
When is OOP the best approach?
When it helps you.
When is OOP a bad approach?
When it impedes you.
That's really as specific as it gets. Sometimes you don't need OOP, sometimes it's not available in the language you're using, sometimes it really doesn't make a difference.
I will say this though, when it comes to technique and best practices continue to double check what your professors tell you. Just because they're teachers doesn't mean they're experts.
It might be helpful to think of the P of OOP as Principles rather than Programming. Whether or not you represent every domain concept as an object, the main OO principles (encapsulation, abstraction, polymorphism) are all immensely useful at solving particular problems, especially as software gets more complex. It's more important to have maintainable code than to have represented everything in a "pure" object hierarchy.
My experience is that OOP is mostly useful on a small scale - defining a class with certain behavior, and which maintains a number of invariants. Then I essentially just use that as yet another datatype to use with generic or functional programming.
Trying to design an entire application solely in terms of OOP just leads to huge bloated class hierarchies, spaghetti code where everything is hidden behind 5 layers of indirection, and even the smallest, most trivial unit of work ends up taking three seconds to execute.
OOP is useful --- when combined with other approaches.
But ultimately, every program is about doing, not about being. And OOP is about "being". About expressing that "this is a car. The car has 4 wheels. The car is green".
It's not interesting to model a car in your application. It's interesting to model the car doing stuff. Processes are what's interesting, and in a nutshell, they are what your program should be organized around. Individual classes are there to help you express what your processes should do. (If you want to talk about car things, it's easier to have a car object than to talk about all the individual components it is made up of, but the only reason you want to talk about the car at all is because of what is happening to it: the user is driving it, or selling it, or you are modelling what happens if someone hits it with a hammer.)
So I prefer to think in terms of functions. Those functions might operate on objects, sure, but the functions are the ones my program is about. And they don't have to "belong" to any particular class.
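To make that concrete, here is a minimal Python sketch (the Interval class and its invariant are invented for illustration): a small class guards one invariant, and free functions treat it as just another datatype.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        """A small value type guarding one invariant: lo <= hi."""
        lo: float
        hi: float

        def __post_init__(self):
            if self.lo > self.hi:
                raise ValueError("Interval requires lo <= hi")

    # Free functions do the "doing"; they need not belong to the class.
    def overlaps(a: Interval, b: Interval) -> bool:
        return a.lo <= b.hi and b.lo <= a.hi

    def total_length(intervals) -> float:
        return sum(i.hi - i.lo for i in intervals)  # generic/functional reuse

    print(overlaps(Interval(0, 2), Interval(1, 3)))        # True
    print(total_length([Interval(0, 2), Interval(5, 6)]))  # 3.0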
Like most questions of this nature, the answer is "it depends."
Frederick P. Brooks said it best in "The Mythical Man-Month": there is no single strategy, technique, or trick that will exponentially raise the productivity of programmers. You wouldn't use a broadsword to make a surgical incision, and you wouldn't use a scalpel in a sword fight.
There are amazing benefits to OOP, but you need to be comfortable with the pattern to take advantage of these benefits. Knowing and understanding OOP also allows you to create a cleaner procedural implementation for your solutions because of the underlying concepts of separation of concerns.
I've seen some of the best results of using OOP when adding new functionality to a system or maintaining/improving a system. Unfortunately, it's not easy to get that kind of experience while attending a university.
I have yet to work on a project in the industry that was not a combination of both functional and OOP. It really comes down to your requirements and what are the best (maybe cheapest?) solutions for them.
OOP is not always the best approach. However it is the best approach in the majority of applications.
OOP is the best approach in any system that lend itself to objects and the interaction of objects. Most business applications are best implemented in an object-oriented way.
OOP is a bad approach for small one-off applications where the cost of developing a framework of objects would exceed the needs of the moment.
Learning OOA, OOD, and OOP skills will benefit most programmers, so it is definitely useful for universities to teach them.
The relevance and history of OOP run back to the Simula languages of the 1960s, as a way to engineer software conceptually, where the developed code defines both the structure of the source and the general permissible interactions with it. Obvious advantages are that a well-defined and well-created object is self-justifying and consistently repeatable as well as reliable; ideally, it is also able to be extended and overridden.
The only time I know of that OOP is a 'bad approach' is in embedded systems programming efforts where resource availability is restricted; of course, that's assuming your environment gives you access to OOP features at all (as was already stated).
The title asks one question, and the post asks another. What do you want to know?
OOP is a major paradigm, and it gets major attention. If metaprogramming becomes huge, it will get more attention. Java and C# are two of the most used languages at the moment (see: SO tags by number of uses). I think it's ignorant to state either way that OOP is a great/terrible paradigm.
I think your question can best be summarized by the old adage: "When the hammer is your tool, everything looks like a nail."
OOP is usually an excellent approach, but it does come with a certain amount of overhead, at least conceptual. I don't do OO for small programs, for example. However, it's something you really do need to learn, so I can see requiring it for small programs in a University setting.
If I have to do serious planning, I'm going to use OOP. If not, I won't.
This is for the classes of problems I've been doing (which includes modeling, a few games, and a few random things). It may be different for other fields, but I don't have experience with them.
My opinion, freely offered, worth as much...
OOD/OOP is a tool. How good of a tool depends on the person using it, and how appropriate it is to use in a particular case depends on the problem. If I give you a saw, you'll know how to cut wood, but you won't necessarily be able to build a house.
The buzz that I'm picking up on is that functional programming is the wave of the future because it's extremely friendly to multi-threaded environments, so OO might be obsolete by the time you graduate. ;-)

What are some advanced software development topics every developer should know? [closed]

Let's say your company has given you the time and money to acquire training on as many advanced programming topics as you can eat in a year, carte blanche. What would those topics be, and how would you prefer to acquire them?
Assumptions:
You still have deliverables to produce, but you're allowed one week per month for the year for this training.
The training can come from anywhere, e.g. classroom, on-site instructor, books, subscriptions, podcasts, etc.
Subject matter can cover any platform, technology, language, DBMS, toolset, etc.
Concurrent/parallel programming and multi-threading, especially with respect to memory models and memory coherency. I think every programmer should be aware of the considerations in this arena as we move into a world of multi-core/multi-CPU hardware.
For this I would probably use Internet research most heavily, but an on-campus primer at a good university could be a good way to start off.
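To make the hazard concrete, a minimal Python sketch (names invented): unsynchronized read-modify-write on a shared counter loses updates, and a lock restores correctness.

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            with lock:  # remove this and counter += 1 becomes a data race
                counter += 1

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 400000 with the lock; typically less without it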
Security!
Far too many programmers just build something and think they can add security as an afterthought after finishing the "main" part of the program. You could always benefit from knowing more about how to secure your app, how to design software to be secure from the get-go, how to do intrusion detection, etc.
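As one concrete habit such training instills, consider SQL injection. A small sketch using Python's built-in sqlite3 (the table and hostile input are invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    name = "alice' OR '1'='1"  # hostile input

    # Vulnerable: string concatenation lets input rewrite the query.
    # conn.execute("SELECT * FROM users WHERE name = '" + name + "'")

    # Safe: a parameterized query treats the input purely as data.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
    print(rows)  # [] (the injection attempt matches nothing)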
Advanced Database Development
Things like data warehousing (MDX, OLAP queries, star schemas, fact tables, etc), advanced performance tuning, advanced schema and query patterns, and the like are always useful.
Here are the three that I'm always finding myself explaining to junior developers who didn't get enough CS training. All that other stuff is generally more hype than substance, or can be fairly easily picked up. But if you don't know these three, you can do a great deal of damage:
Algorithm analysis, including Big-O notation.
The various levels of cohesion and coupling.
Amdahl's Law, and how it pertains to optimizations (see the sketch just below).
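As a quick illustration of the Amdahl's Law item, a sketch of the formula in Python (the numbers are invented):

    def amdahl_speedup(p, s):
        """Overall speedup when a fraction p of runtime is sped up by factor s."""
        return 1.0 / ((1.0 - p) + p / s)

    # A 10x win on a hot spot that is 40% of runtime is nowhere near 10x overall:
    print(amdahl_speedup(0.40, 10))   # ~1.56
    # Even an infinite speedup of that 40% caps out at 1 / (1 - 0.40):
    print(amdahl_speedup(0.40, 1e9))  # ~1.67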
Internationalization issues, especially since it sounds like it would not be an advanced topic. But it is.
Accessibility
It's ignored by so many organizations but the simple fact of the matter is that there are a huge number of people with low or no vision, color blindness, or other differences that can make navigating the web a very frustrating experience. If everybody had at least a little bit of training in it, we might get some web based UIs that are a little more inclusive.
Object oriented design patterns.
I guess "advanced" is different for everyone, but I'd suggest the following as being things that most decent developers (i.e. ones that don't need to be told about NP-completeness or design patterns) could gain from:
Multithreading techniques that go beyond "lock" and when to apply them.
In-depth training to learn and habituate themselves to clever features in their toolchain (IDE/text editor, debugger, profiler, shell).
Some cryptography theory and hands-on experience with the common flaws in security schemes that people create.
If they program against a database, learning the internals of their database and advanced query composition and tuning techniques.
Developers should know the basics of SQL development and how their decisions impact database performance. It is one thing to write a query; it is another to write a query, understand the explain plan, and make design decisions based on that output. I think a good course on PL/SQL development and database performance would be very beneficial.
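The answer has Oracle's PL/SQL in mind; the same habit can be sketched with Python's built-in sqlite3 (the table and index are invented, and plan wording varies by SQLite version):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")

    # Without an index, the planner has to scan the whole table...
    for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"):
        print(row)  # detail column reads something like 'SCAN orders'

    # ...with one, it can seek straight to the matching rows.
    conn.execute("CREATE INDEX idx_customer ON orders(customer_id)")
    for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"):
        print(row)  # 'SEARCH orders USING INDEX idx_customer (customer_id=?)'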
Unfortunately communication skills seem to fall under the "advanced topics" section for most developers (present company excluded, of course).
Best way to acquire this skill: practice.
Take off the headphones and talk to someone instead of IM'ing or emailing the guy at the next desk.
Pick up the phone and talk to a client instead of lobbing an email over the fence.
Ask questions at a conference instead of sitting behind your laptop screen twittering.
Actively participate in a non-technical meeting at work.
Present something in public.
Most projects do not fail because of technical reasons. They fail because they could not create a team. Communication is vital to team dynamics.
It will not harm your career either.
One of the best courses I took was a technical writing course. It has served me well in my career.
Additionally, it probably does not matter WHAT the topic is. The fact that the organization is interested in it, is paying for it, and the developers want to go, and do go, is a better indicator of success/improvement than any one particular topic.
I also don't think it matters that much what the topic is. Dev organizations deal with so many things during a project that training and then on the job implementation/trial and error will always get you some better perspective - even if the attempts to try out/use the new stuff fail. That experience will probably help more on the subsequent projects.
I'm a book person, so I wouldn't really bother with instruction.
Not necessarily in this order, and depending on what you know already
OO Programming
Functional Programming
Data structures and algorithms
Parallel processing
Set-based logic (essentially the theory behind SQL and how to apply it)
Building parsers (I only include this because it actually came up where I work)
Software development methodologies
NP Completeness. Specifically, how to detect if a problem is NP-Complete, and how to build an approximate solution to the problem.
I see this as important because you don't want a developer to try and solve an NP-complete problem by getting the optimum solution, unless the problem's search space is very small, in which case brute force is acceptable. However, as the search space increases, the time required to solve the problem increases exponentially.
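For a taste of "approximate instead of optimal", here is the textbook 2-approximation for minimum vertex cover (an NP-complete problem), sketched in Python with an invented edge list:

    def vertex_cover_2approx(edges):
        """Take both endpoints of any uncovered edge; the result is
        guaranteed to be at most twice the size of an optimal cover."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
    print(vertex_cover_2approx(edges))  # e.g. {1, 2, 3, 4}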
I'd cover new technologies and trends. Some of the new technologies I'm researching/enhancing my skills with include:
Microsoft .NET Framework v3.0/v3.5/v4.0
Cloud Computing Frameworks (Amazon EC2, Windows Azure Services, GoGrid, etc.)
Design Patterns
I am from MS based developer world, so here is my take on this
More about new concepts in cloud computing (various APIs, etc.), as the industry has been betting on it for some time.
More about LINQ for the .NET Framework.
Distributed databases
Refactoring techniques (which also implies learning to write a good set of unit/functional tests).
Knowing how to refactor is the best way to keep code clean -- it is rare when you get it right the first time (especially in new designs).
A number of refactorings, however, require a decent set of tests to check that the refactoring did not add unexpected behavior.
Parallel computing- the easiest and best way to learn it
Debugging
Debugging by David J. Agans is a good book on the topic. Debugging can be very complex when you deal with multi-threaded programs, crashes, algorithms that don't work, etc. Everybody would be better off being good at debugging.
I'd vote for real-world battle stories. Have developers from other organizations present their successes and failures. Don't limit the presentations to technologies you're using. With a significantly complex project, this is bound to cut into 'advanced' topics you haven't even considered. Real-world successes (and failures) have a lot to teach.
Go to the Stack Overflow DevDays and the ACCU conferences.
Read
Agile Software Development, Principles, Patterns, and Practices (Robert C. Martin)
Clean Code (Robert C. Martin)
The Pragmatic Programmer (Andrew Hunt & David Thomas)
Well if you're here I would hope by now you have the basics down:
OOP Best practices
Design patterns
Application Security
Database Security/Queries/Schemas
Most notably developers should strive to learn multiple programming languages and disciplines, in order for their skill set to be expanded in more than one direction. They don't need to become experts in these other skills but at least have a very acute understanding of integration with their central discipline. This will make them much better developers in the long run, and also let them gain the ability to use all tools at their disposal to create applications that can transcend the limitations of a singular language.
Outside of programming specific topics, you should also learn how to work under Agile, XP, or other team based methodologies in order to be more successful while working in a team environment.
I think an advanced programmer should know how to get your employer to give you the time & money to acquire training on as many advanced programming topics that you can eat in a year. I'm not advanced yet. :)
I'd suggest an Artificial Intelligence class at a college/university. Most of the stuff is fun, easy to grasp (the basics at least), and the solutions to problems are usually creative.
Hitchhikers Guide to the Galaxy.
How would I prefer to acquire the training? I'd love to have a substantial amount of company time dedicated to self-training.
I totally agree on accessibility. I was asked to look into it for the website at work, and there is a real lack of good knowledge on the subject, not to mention a lack of CSS standards to aid the likes of screen readers.
However, my answer goes to GUI design. It's quite a difficult thing to get right: there are too many awful applications out there that could have been prevented just by taking the time to follow HCI (Human-Computer Interaction) advice/designs. Take Google/Apple for inspiration when making a GUI, not your typical hundreds-of-buttons/labels combo that too often gets pushed out.
Automated testing: Unit testing, functional integration testing, non-functional testing
Compiler details (more relevant on some platforms than others): How does the compiler implement certain common constructs in language X? On a byte-code-interpreted platform, how does JIT compilation work? What can be JIT-compiled (for example, can virtual calls be)?
Basic web security
Common design idioms from other problem domains than the one you're working in at the moment.
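One cheap way to poke at the compiler-details item above: Python's standard dis module shows what a construct compiles to (a sketch; exact opcode names vary by Python version):

    import dis

    def virtual_call(shape):
        return shape.area()  # dynamic dispatch, resolved at run time

    dis.dis(virtual_call)
    # The output shows a generic attribute lookup followed by a call;
    # exactly the kind of site a JIT (e.g. PyPy's) tries to specialize
    # once the receiver's type proves stable.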
I'd recommend learning about Refactoring, Test Driven Development, and various unit testing frameworks (NUnit, Visual Test, CppUnit, etc.) I'd also learn how to incorporate automated unit testing into your continuous integration builds.
Ultimately if you can prove your code does what it claims it can do, you don't have to be there to answer questions as to why or how. If a maintainer comes along and tries to "fix" your code, they'll know instantly if they broke it. Tests written around the requirements (use cases) explain to the maintainer what your users wanted it to do, and provide a little working example of how to call it. Think of unit tests as functional documentation.
Test Driven Development (TDD) is a more novel design approach that begins with the requirements, where you start by writing a test before you write the code. You then write exactly enough code required to pass the test. You have to stop before you write extra code (that you may never need), because you will refactor it later if you find that you really needed it.
What makes TDD cool is that a bad interface (such as one with lots of dependencies) is also very hard to write tests for. It's so hard that a coder would rather refactor the interface to make it easier to test. And that refactoring simplifies the code, removing inappropriate dependencies, or grouping related tests together to make it easier to test, thus improving cohesion. By making it immediately apparent to the developer when he's writing a badly interfaced module, the developer sticks to the architecture and gravitates to the principles of tight cohesion and loose coupling. Good interfaces are the natural result. And as a bonus, once you pass all your tests, you know you're done.
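A minimal sketch of that red/green loop in Python, using FizzBuzz as a stand-in problem:

    import unittest

    # Step 1: write the tests first; they fail because fizzbuzz doesn't exist yet.
    class FizzBuzzTest(unittest.TestCase):
        def test_multiples_of_three(self):
            self.assertEqual(fizzbuzz(3), "Fizz")

        def test_plain_numbers(self):
            self.assertEqual(fizzbuzz(2), "2")

    # Step 2: write exactly enough code to make them pass, and no more.
    def fizzbuzz(n):
        return "Fizz" if n % 3 == 0 else str(n)

    if __name__ == "__main__":
        unittest.main()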
On the surface this seems like an easy question to answer, just enter your favorite pet peeve about what other developers can't do correctly. But when I read through the answers and gave it some thought, I realized that every "advanced topic" brought up was covered in my undergraduate computer science curriculum--20 years ago. And I doubt that OO, security, functional programming, etc. concepts have changed in that time. Sure the tools have, but I argue that tools are different than topics.
So what is an "advanced topic" in computer science? Who is the Turing, Knuth, Yourdon of the 21st century?
I don't have a clear answer to this question, though I'd like to see more work on theories for parallel programming that will enable tools to abstract that messy stuff for developers.
Funny that no one has mentioned:
debugging
the tools and IDE you work with
the platform you are developing for
Everyday development is much more fun if you know your tools really well, and you accomplish more and make your life easier if you can debug someone else's code with ease.
Source Control

Development Cost of Procedural Programming vs. OOP?

I come from a fairly strong OO background, and the benefits of OOD and OOP are second nature to me, but recently I've found myself in a development shop tied to procedural programming habits. The implementation language has some OOP features, but they are not used in optimal ways.
Update: everyone seems to have an opinion about this topic, as do I, but the question was:
Have there been any good comparative studies contrasting the cost of software development using procedural programming languages versus Object Oriented languages?
Some commenters have pointed out the dubious nature of trying to compare apples to oranges, and I agree that it would be very difficult to accurately measure, however not entirely impossible perhaps.
Almost all of these questions are confounded by the problem that individual programmer productivity varies by an order of magnitude or more; if you happen to have an OO programmer at productivity x and a "procedural" programmer who is a 10x programmer, the procedural programmer is liable to win even if OO is faster in some sense.
There's also the problem that coding productivity is usually only 10-20 percent of the total effort on a realistic project, so higher coding productivity doesn't have much impact; even that hypothetical 10x programmer, or an infinitely fast programmer, can't cut the overall effort by more than 10-20 percent.
You might have a look at Fred Brooks' paper "No Silver Bullet".
After poking around with Google I found this paper here. The search terms I used were "productivity object oriented".
The opening paragraph goes on to say:
"Introduction of object-oriented technology does not appear to hinder overall productivity on new large commercial projects, but it neither seems to improve it in the first two product generations. In practice, the governing influence may be the business workflow and not the methodology."
I think you will find that object-oriented programming is better in specific circumstances but neutral for everything else. What sold my bosses on converting my company's CAD/CAM application to an object-oriented framework is that I showed precisely the areas in which it would help. The focus wasn't on the methodology as a whole but on how it would help us solve specific problems we had. For us, it was having an extensible framework for adding more shapes, reports, and machine controllers, and using collections to remove the memory limitations of the older design.
OO and procedural offer two different ways to develop, and both can be costly if badly managed.
If we suppose that the work is done by the best person in both cases, I think the result might be equal in terms of cost.
I believe the cost difference will show up in the maintenance phase, where you need to add features and modify existing ones. Procedural projects are harder to test automatically, are less able to expand without affecting other parts, and are harder to understand piece by piece (because cohesive parts aren't necessarily grouped together).
So I think the OO cost will be lower in the long run compared to procedural.
I think S.Lott was referring to the "unrepeatable experiment" phenomenon, i.e. you cannot write application X procedurally, then rewind time and write it OO to see what the difference is.
You could write the same app twice in two different ways, but:
you would learn something about the app doing it the first way that would help you the second way, and
you may be better at OO than at procedural, or vice versa, depending on your experience and the nature of the application and the tools chosen;
so there really is no direct basis for comparison.
Empirical studies are likewise useless, for similar reasons: different applications, different teams, etc.
Paradigm shifts are difficult, and a small percentage of programmers may never make the transition.
If you are free to develop your way, then the solution is simple: develop things your way, and when your co-workers notice that you are coding circles around them and your code doesn't break nearly as often, and they ask you how you do it, teach them OOP (along with TDD and any other good practices you use).
If not, well, it might be time to polish the resume... ;-)
Good idea. A head-to-head comparison. Write application X in a procedural style, and in an OO style and measure something. Cost to develop. Return on Investment.
What does it mean to write the same application in two styles? It would be a different application, wouldn't it? The procedural people would balk that the OO folks were cheating when they used inheritance or messaging or encapsulation.
There can't be such a comparison. There's no basis for comparing two "versions" of an application. It's like asking if apples or oranges are more cost-effective at being fruit.
Having said that, you have to focus on things other folks can actually see.
Time to build something that works.
Rate of bugs and problems.
If your approach is better, you'll be successful, and people will want to know why.
When you explain that OO leads to your success... well... you've won the argument.
The key is time. How long does it take the company to change the design to add new features or fix existing ones. Any study you make should focus on that area.
My company had an event-driven, procedure-oriented design for CAM software in the mid-'90s, created using VB3. It was taking a long time to adapt the software to new machines, and a long time to test the effects of bug fixes and new features.
When VB6 came along, I was able to chart out the current design against a new design that fixed the testing and adaptation problems. The non-technical boss grasped what I was trying to do right away.
The key is to explain WHY OOP will benefit the project. Use things like Refactoring by Fowler and Design Patterns to show how a new design will lower the time to do things. Also include how you get from Point A to Point B. Refactoring will help with showing how you can have working intermediate stages that can be shipped.
I don't think you'll find a study like that. At the very least, you should define what you mean by "cost". OOP design is somewhat slower, so in the short term development may be faster with procedural programming. In the very short term, spaghetti coding may be faster still.
But when the project begins growing, things are the opposite, because OOP design is better suited to managing code complexity.
So in a small project procedural design MAY be cheaper, because it's faster and you don't hit the drawbacks.
But in a big project you'll get stuck very quickly using only a simple paradigm like procedural programming.
I doubt you will find a definitive study. As several people have mentioned this is not a reproducible experiment. You will find anecdotal evidence, a lot of it. Some people may find some statistical studies, but I would examine them carefully. I am not aware of any really good ones.
I will also make another point: there is no such thing as purely object-oriented or purely procedural in the real world. Many, if not most, object methods are written with procedural code. At the same time, many procedural programs use OO methodologies such as encapsulation (also called abstraction by some).
Don't get me wrong, OO and procedural programs look and are different, but it is a matter of dark gray vs light gray instead of black and white.
This article says nothing about OOP vs Procedural. But I'd think that you could use similar metrics from your company for a discussion.
I find it interesting as my company is starting to explore the ROWE initiative. In our first session, it was apparent that we don't currently capture enough metrics on outcomes.
So you need to focus on 1) Is the maintenance of current processes impeding future development? 2) How are different methods going to affect #1?

Where is Reverse Engineering used? [closed]

I ask myself where reverse engineering is used. I'm interested in learning it, but I don't know whether I can or should put it on my CV.
I don't want my new chief to think I am an evil hacker or something. :)
So is it worth it?
Should I learn it or put my effort somewhere else?
Is there a good Book or tutorial out there? :)
Reverse engineering is commonly used for deciphering file formats for improving interoperability. For example, many popular commercial Windows applications don't run on Linux, which necessitates reverse engineering of files produced by those applications, so that they can be used in Linux. A good example of this would be the various formats supported by Gimp, OpenOffice, Inkscape, etc.
Another common use of reverse engineering is deciphering protocols. Good examples include Samba, DAAP support in many non-iTunes applications, cross platform IM clients like Pidgin, etc. For protocol reverse engineering, common tools of the trade include Wireshark and libpcap.
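At the byte level, that work is a loop of guessing a layout and checking it against captures. A Python sketch with the standard struct module (the blob and the field layout are entirely hypothetical):

    import struct

    # A captured 8-byte record from an undocumented format.
    blob = bytes.fromhex("0100 1000 0000 d204")

    # Hypothesis: little-endian u16 version, u32 payload length, u16 message id.
    version, length, msg_id = struct.unpack("<HIH", blob)
    print(version, length, msg_id)  # 1 16 1234: plausible, so the guess survives

    # Decode more samples; when a field stops making sense, revise the hypothesis.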
No doubt reverse engineering is often associated with software cracking, which is primarily about understanding program disassembly. I can't say that I've ever needed to disassemble a program other than out of pure curiosity or to make it do something it wasn't meant to. One plus side of reverse engineering programs is that to make any sense of it, you will need to learn assembly programming. There are, however, legal ways to hone your disassembly skills, specifically using crackmes. An important point is that when you're developing security measures in your applications, or if you're in that business, you need to know how reverse engineers operate to try to stay one step ahead.
IMHO, reverse engineering is a very powerful and useful skill to have. Not to mention, it's usually fun and addictive. Like hmemcpy mentioned, I'm not sure I would use the term "reverse engineering" on my CV, only the skills/knowledge associated with it.
Reverse engineering is usually something you do because you have to, not because you want to. For example, there are legal issues with simply reverse engineering a product! But there are necessary cases - where (for example) the supplier has gone and no longer exists or is not contactable. A good example would be the WMD editor that you typed your question into. The SO team/community had to reverse engineer this from obfuscated source to apply some bug fixes.
One of the fields, in my opinion, where reverse engineering skills might be useful is anti-virus industry, for instance. However, I wouldn't place "reverse engineering" on my CV, but rather I'd write down experience in the Assembly language, using miscellaneous disassemblers/debuggers (such as IDA, SoftIce or OllyDbg) and other relevant skills.
I have worked on reverse engineering projects, and they certainly had nothing to do with hacking. We had the source code for all such projects (legitimately), but for one of the projects nobody actually knew what the code did behind the scenes, and how it interacted with other systems. That information had long been lost. In another project, we had the source code and some documentation, but the documentation wasn't up to date, so we had to reverse-engineer the source to update the documentation.
I don't mind having such projects on my CV. In fact, I believe I've learned a lot during the process.
Reverse engineering is needed whenever the documentation is lost or it never existed. Having the source helps, but you still have to reverse engineer the original logic, flow control and bugs out of it.
Working with strange hardware often forces you to reverse engineer. For instance, I was once working with an old signal acquisition card that behaved strangely; putting in a beautiful sine wave produced awfully crippled data. It turned out that every other byte was two's complement and the rest one's complement; at least, when interpreted that way, the data became quite beautiful. Of course, this wasn't documented anywhere, and the card worked perfectly when used with its own proprietary software.
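A guess at what that fix-up might have looked like, sketched in Python (the real card's exact encoding is unknown):

    def decode(samples):
        """Even-indexed bytes as two's complement, odd-indexed as one's complement."""
        out = []
        for i, b in enumerate(samples):  # b is 0..255
            if i % 2 == 0:
                out.append(b - 256 if b >= 128 else b)      # two's complement
            else:
                out.append(-(b ^ 0xFF) if b >= 128 else b)  # one's complement
        return out

    print(decode(bytes([0x05, 0xFA, 0xFB, 0x04])))  # [5, -5, -5, 4]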
It is very common (in my experience) to encounter older code which has defects, has become outdated due to changing requirements, or both. It's often the case that there's inadequate documentation, and the original developer(s) are no longer available. Reverse engineering that code to understand how it works (and sometimes to make a repair-or-replace decision) is an important skill.
If you have the source, it's often reasonable to do a small, carefully-planned, strictly-scoped amount of cleanup. (I'm hinting out loud that this can't be allowed to become a sinkhole for valuable developer time!)
It's also very helpful to be able to exercise the code in a testbed, either to verify that it does what was expected or to identify, document, isolate, and repair defects.
Doing so safely requires careful work. I highly recommend Michael Feathers' book Working with Legacy Code for its practical guidance in getting such code under test.
RCE is a great skill for security folks (research, exploitation, IDS, IPS, AV, etc.), and it also proves that you've got a deep, low-level understanding of the subject.
It makes finding your way around easier when working with 3rd-party libraries as well.
If you are not working in the security industry and are not good at ASM, don't bother learning it; it's generally hard to learn.
Books
Hacking: The Art of Exploitation covers the subject from a security point of view.
You might also want to read books about OllyDbg and IDA Pro.

Suggestions on starting a child programming [closed]

What languages and tools do you consider a youngster starting out in programming should use in the modern era?
Lots of us started with proprietary BASICs, and they didn't do all of us long-term harm :) but given the experiences you have had since then and your knowledge of the domain now, are there better options?
There are related queries to this one, such as "Best ways to teach a beginner to program?" and "One piece of advice" about starting adults programming, both of which I submitted answers to, but children might require a different tool.
Disclosure: it's bloody hard choosing a 'correct' answer to a question like this, so whoever has the best score in a few days will get the 'best answer' mark from me, based on the community's choice.
I would suggest LEGO Mindstorms: it provides an intuitive drag-and-drop interface for programming, and because it comes with hardware it provides something tangible for a child to grasp. Also, because it is "LEGO", they might think of it as more of a game than a programming exercise.
My day job is in a school, and over the past few years I've seen or taught (or attempted to teach) various children, in various numbers, programming lessons.
Children are all different - some are quick learners, some aren't. In particular, some have better literacy skills than others, and that definitely makes a difference to the speed at which they'll pick up programming. I bet that most of us here, as professional computer programmers and the kind of people who read and post to forums for fun, learnt to read at a pretty young age. For those kinds of children, and if it's your own child who you can teach one-on-one, you could do worse than JavaScript - it has the advantage that you can do real stuff with it right away, and the edit-test cycle is simply hitting "refresh" in the browser. It gets confusing when you start to run in to how JavaScript does everything asynchronously, and is tricky to debug, but for a bright child under close tuition these problems can be overcome.
LEGO Mindstorms is definitely up there at the top of the list. Most schools now super-glue the bricks together to create pre-made models that can't have bits nicked off of them, but this shouldn't be a problem at home. Over on the Times Educational Supplement site (website forum for the UK's weekly teaching newspaper), the "what programming language is best for children?" topic comes up pretty regularly. Lots of recommendations over there for Scratch as an alternative to Mindstorms - bit more freedom than Mindstorms, again probably better for the brighter student who could also be given a soldering iron.
I've found that slower pupils can still have problems with Mindstorms, even though the programming environment is "graphical" - there's still a lot going on on screen, and there's a fair bit to remember (this was an older version, mind - haven't tried the snazzy new one yet). In my experience, the best all-round introduction to programming is probably still LOGO - actually a considerably more powerful language than most people give it credit for. The original Mindstorms book by Seymour Papert (nothing to do with LEGO - they nicked the title of the book for their product), one of the originators of LOGO, is the canonical reference for teaching programming to children as a "thinking skill" and for the concept of Constructionism in learning.
We've had classes of 7 or 8 year-olds programming LOGO. Note that we aren't aiming to make them "software developers"; that's a career path they can decide on at some point post-16. At a young age we're trying to get them to think of "computer programming" as just another tool: how to set out a problem to be solved by a computer, in the same way they might use a mind map to help them organise and remember stuff for an exam. No poor child should be sat down and drilled in the minutiae and use of a particular language; they should be left to explore and figure stuff out as they like.
I'll second Geoff's suggestions of Phrogram (used to be KPL), and Alice.
My only other suggestion is Lego Mindstorms NXT. The NXT's programming language is drag-and-drop, is very easy to use, and can do some very complicated tasks once you learn it. Also young boys usually like seeing things move. :)
I've used Alice and NXTs with some young kids, and they've taken to it very well.
Two possibilities are:
Scratch - developed at MIT - http://scratch.mit.edu/
and
EToys from the One Laptop per Child fame - http://wiki.laptop.org/go/Squeak
Full disclosure: I'm one of the guys who invented Kid's Programming Language, which is now http://www.Phrogram.com, which others have recommended here. Let me add some programmer-oriented info about it.
It's a code IDE, rather than drag-and-drop, or designer-based. This was intentional on our part - we wanted to make it easy and fun to do real text-based programming, particularly programming games and graphics. This is a fundamental difference between us and Alice and Scratch. Which you pick is a matter of the kid, their age and aptitudes, your goals. Using them serially with the same beginner might be a great way to go - if you do that, I would recommend Scratch, Alice, Phrogram as the order. Phrogram has worked best for 12 years and up, but I know dads with 6 year olds who have taught their kids with it, and I know 10 year olds who have taught themselves with it.
The language is as much like English as we could make it, and is as minimal as we could make it. The secret sauce is in the class-based object hierarchy, which is again as simple, intuitive, and English-like as we could make it. The object hierarchy is optimized for games and graphics. 3D models are available, as are 2D sprites. Absolute movement using screen coordinates is supported, as is relative movement a la LOGO turtles: Forward(x), TurnLeft(y).
The IDE comes with over 100 examples, some language examples (loops), some learning examples (arrays), some fully-functional games and sims (Pong, Missile Command, Game of Life).
To give you a sense of how highly leveraged we made the language and the IDE: with 27 instructions you can fly a 3D spaceship model around a 3D skybox, using your keyboard. The same with a 2D sprite is 12 to 15 instructions.
We are working on a Blade-compatible release of Phrogram that will allow programs to run on the XBox 360. Yeah, the XBox, on your big TV. Nice motivator for getting a kid started? :)
Phrogram includes support for class-based programming, with methods and properties - but that's only encapsulation, not inheritance or polymorphism.
A tutorial and user guide are available.
My own ebook is available at Amazon and other places online, "Learn to Program with Phrogram!," and gets a beginner started by programming the classic Pong.
Phrogram Programming for the Absolute Beginner, by Jerry Lee Ford, Jr., is also available, as a paperback, at Amazon and elsewhere.
For a child, I would go with Alice. Any kid is going to like the drag-and-drop interaction that Alice uses better than trying to remember how to spell and punctuate any programming language. He/She will learn the basic programming structures (conditionals, loops, etc.) and will experience the fun of building an animated program they can show off to other family or friends.
A beginner CS class at the local community college actually uses Alice to teach programming in a language-independent way. It provides a good foundation for moving into programming in a particular language (or a few languages) down the road.
I recently saw a presentation about Greenfoot (a Java-based learning environment for children). It looked awesome. If I had kids, I would give it a try.
Link to the presentation
It is a very playful environment, where you could start with very basic methods. The kids learn thinking in an object oriented way (you cannot instantiate an animal, but you can instantiate a cat). And the better they get, the more of Java you can uncover for/with them.
I'd go with Scratch. Some points regarding it:
It's a graphical programming language, not text-based (this might be positive or negative). That makes it more intuitive and easy for kids (7 and up).
It's actually highly object-oriented. The objects you write these graphical scripts for carry the code attached to them and can be reused and moved around.
Very important: quick and impressive results. Kids need to get going fast and get results in order to get hooked.
I'd like to note that although many of us started programming at a young age in BASIC or LOGO and became programmers later in life, that doesn't mean those are good languages to start with. I think that kids today have much better options, like Scratch or Alice.
Text-based languages (Python, Ruby, BASIC, C#, or even C) depend on external libraries and tools (editors, compilers), while something like Alice or Scratch is all-inclusive and will teach kids (it's not aimed at teens) programming concepts. Later they can move on and expand their learning.
Check out Phrogram (formerly KPL) and Alice
I'd say: give the kid a real C64, because that's how I got started. But, today... I'd say Ruby, but Ruby is a bit too chaotic. BASIC would be better in the long run. Processing is easy to learn, and it's basically Java.
The reason I recommend a C64 is because it's BASIC, but you still have to learn certain computer-related things, like the memory model, pixels, characters, character maps, newlines, etc. etc, if you want to do more advanced stuff. Also, if your kid finds it boring, you know his heart really isn't into coding.
I would pitch LOGO. It was something that was taught in my elementary school. It gives nearly immediate feedback, and will teach really basic programming concepts. Moving that little turtle around can be a lot of fun.
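Python's built-in turtle module gives the same immediate feedback if no LOGO interpreter is handy; a square in a handful of lines:

    import turtle

    t = turtle.Turtle()
    for _ in range(4):  # forward + turn, four times: a square appears
        t.forward(100)
        t.left(90)
    turtle.done()       # keep the window open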
For a child, I would go with Alice.
Here is another vote for Alice. My 4 kids have had a ton of fun working with it and learning the basic concepts of programming. Of course, to them it's all about socializing with fairies and ogres, but heck, the darn legacy system I work on could use some fairies and ogres too.
I'd recommend python, because it's so terse and expressive. Seems less likely to frustrate when getting started, but offers plenty of room to learn more advanced concepts as well.
Game Maker might be another approach. You can start simple with easy drag and drop development, and then introduce more advanced programming as you go. The book The Game Maker's Apprentice: Game Development for Beginners has a number of sample games and takes you through the steps required to make them.
I think Python is a good alternative; it is a very powerful language, and you can easily do a lot of things with it (not boring at all).
Check out Squeak, developed by Alan Kay, who thinks programming should be taught at an early age.
How old? Lots of us started with BASIC at some point, but before then, I learned the concepts of stringing commands together, variables, and looping with LOGO. Figuring out how to draw a circle with a triangle that can only go in a straight line and turn was my very first programming accomplishment.
Edit: This question & its answers make me feel old.
Though _why hasn't given it much love in the past year or so, for a while I was really excited about Hackety Hack. I think the key for most new programmers, especially children, who are more than apt to lose interest in things, is instantaneous feedback. That was the really wonderful thing about Hackety Hack: a few lines of code, and suddenly you have something in front of you that does something. There are a few similar applications aimed at things like drawing graphics (one of which, Scribble!, I briefly assisted Nathan Weizenbaum on). Kids simply need regular positive feedback that they're doing something correctly, else there's nothing to keep them interested in the task at hand. What I think the future holds for teaching children to program is some sort of DSL built on top of a language with friendly syntax (these would arguably include Ruby, Python, and Scheme) whose purpose is to provide an intuitive environment for constructing simple games (say, Tic-Tac-Toe or Hangman).
I think you should start them off in C. The sooner they can get the hang of pointers the better.
See Understanding Pointers and Should I learn C.
I think the first question is: what sort of program would it be interesting to create? One of the things that got me started with programming as a kid (in BBC basic and then QBasic) was the ease of writing graphical programs. I could write a couple of lines of code and see my program draw a line on the screen straight away.
The closest I've seen to that sort of simplicity recently are the pygame library for python and Processing, a set of java libraries with an IDE.
I imagine that hacking on web pages would be another good way to get started: that would entail HTML, Javascript (using a library like jQuery), perhaps PHP or something along those lines.
Whatever tools you provide, the crucial thing is for it to be easy to get started straight away. If you have to write twenty lines of correct code and figure out how to invoke the compiler before you see any tangible results, progress is going to be slow.
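For reference, the pygame route really does show a line straight away (pygame is a third-party package; a minimal sketch):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((400, 300))
    screen.fill((0, 0, 0))
    pygame.draw.line(screen, (255, 255, 255), (20, 20), (380, 280))
    pygame.display.flip()

    # Keep the window open until it is closed.
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
    pygame.quit()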
There are many good suggestions here already. I really agree with Kronikarz: get a retro computer (or emulator) that you are interested in and teach with that. Why a retro computer? BASIC is built in, and making sounds and primitive graphics is a trivial task. The real thing might be better than an emulator because it will be a bit more fascinating to a child who is used to seeing only modern devices.
As I said here, I'd go for Squeakland and the famous Drive a Car example (powered by Squeak).
Smalltalk syntax is simple, which is great for children.
And later, as the child develops, they can learn more complex and even very advanced concepts that are also in Squeak (e.g. programming stateful webapps with automated refactoring and automated unit tests!).
And like #cpuguru and #Rotem said, Scratch (also Squeak based) is great too.
I think Java might be a good choice simply because you can make GUIs easily, and see "cool things" happening. For the same reason, maybe any of the .NET languages. I've also heard good things about scripting languages (Ruby and Python, especially) for getting kids to learn how to program.
Well, if they're young and haven't learnt their ABCs, you could try them on BF: none of those pesky letters and numbers to deal with.
I'll get me' coat.
Skizz
I would go with what I wish I had known first: a simple MS-DOS box and the integrated assembler (debug). It is great to really learn and understand the basics of talking to a computer.
If that does not scare away a child, then I would go the "next level up" and introduce C. This shouldn't be hard, given that the basic concepts of pointers, registers, and instructions in general are well understood by then.
However, I am not entirely sure where to go next. Should one take the big jump to Lisp, Haskell, or similarly abstract languages, or should a simple object-oriented language (maybe even C++) be thrown in, or would that hurt more than help?
Looking at Alice, I see it is "designed for high school and college students". There appears to be another language/version called Story Telling Alice that is "designed for middle-school students"
Alice Download Page
I think Context Free Art might be a good choice, with output of graphics, it makes it a lot of fun learning about context-free grammar.
Try Guido van Robot. It's an excellent introduction to robotics, and it's a great way to introduce kids to the programming side of things (vs. the "building the robots" side).
Wasn't Smalltalk designed for such a purpose? I think Ruby would be a good choice, as a descendant of Smalltalk.
I know in the first few years of high school we were 'taught' Logo, and strangely, HTML. After that, the progression went to macros in MS Office, followed by basic VBA, followed by Visual Basic.