How large a role does subjectiveness play in programming? - language-agnostic

I often read about the importance of readability and maintainability. Or, I read very strong opinions about which syntax features are bad or good. Or discussions about the values of certain paradigms, like OOP.
Aside from that, this same question floats about in my mind whenever I read debates on SO or Meta about subjective questions. Or read questions about best practices and sometimes find myself or others disagreeing.
What role does subjectiveness play within the programming realm?
Sometimes I think it plays a large role. Software developers are engineers in a way, but also people. A large part of programming is dealing with code that's human-readable. This is very different from Math or Physics or other disciplines with very exact and structured rules. Here the exact structure and rules are largely up in the air, changeable on a whim, hence the number of languages in existence. And one person may find one language very readable, while another person may find their own language the most comfortable.
The same goes for practices. One person may not like certain accepted practices. I myself find splitting classes into different files very unreadable, for instance.
But, I can't say rules haven't helped in general. Certain practices have and do make life easier. And new languages have given rise to syntax and structure that make life easier. There's certainly been a progression towards code that is easier to read and maintain even given a largely diverse group of people. So maybe these things aren't as subjective as I thought.
It reminds me, in a way, of UI design. Certainly it's subjective, but then there's an entire discipline involved in crafting good UI and it tends to work.
Is there something non-subjective about the ideas behind maintainability, readability, and other best practices? Is there something tangible to grasp when one develops a new language or thinks of new practices?

Arguably your question is really about the distinction between programming, which is mathematical, algorithmic and scientific, and software engineering, which is subjective, variable and human-focused.
Great programmers are not necessarily great software engineers, and vice versa. The two skillsets, while not exclusive by any means, have less overlap than they appear at first. Their relative importance depends a lot on the project: a brilliant programmer working alone can turn out amazing examples of technical genius, and it doesn't matter that nobody else can understand or maintain it, because he's not going to share the code anyway. But move into an enterprise environment -- like corporate in-house software development -- and I'll gladly trade you ten "cave troll" geniuses for a mediocre programmer who understands the importance of readability and documentation.
It's been my experience that the world needs great software engineers more than it needs great programmers. Relatively few people in this day and age are writing software which is truly performance-critical (OS kernels, compilers, graphics engines, realtime embedded systems, etc), and the Internet allows mediocre programmers to quickly grab algorithmic solutions for problems they couldn't solve alone. But nearly everyone writing professional code has to work within a team. And team productivity rises and falls dramatically on the ability of its members to communicate effectively and distribute workload efficiently, two skills which are highly subjective and impossible to prove by rigid formula.
Most software engineering principles are built on experience rather than objective law. Much like the social sciences, we study, learn, adapt and apply -- but with no real guarantees of outcome. All we can say is that some things seem to work better than others in most groups.

I think a lot of it is necessarily determined by how much our minds are able to process at one time. So it comes down to how much the language and tools enable a team or a developer to break the problem down into chunks that are meaningful by themselves, but not so large that they become too hard to grasp. The common theme is the art of organizing information (in this case, the code, the logic, ...). But that's not so different from Maths or Physics, by the way.

Just as the best authors borrow from many styles, the best programmers keep a huge range of patterns in their mental arsenal. Slavishly following a few patterns and adhering to some absolute truth is both lazy and dangerous.
Put another way, the day we rely on robots for code review is the day I quit.

It all depends on your point of view :-)
But to answer your questions, I think one way to view subjectivity is to recognize that software languages, tools, and best practices are a shared means of communication among individuals. Yes, a programming language is a formal way of instructing a computer how to behave, but a programming language may also be viewed as a way to define and communicate specifications to a high level of detail (the code is the ultimate spec, is it not?).
So as far as we may want to concern ourselves with the degree of subjectivity in software languages, tools, and best practices, I would say that the lack of subjectivity may indicate how well communication is facilitated.
Yes, individuals have certain proclivities that are expressed in their habits and tendencies, but that should not ultimately matter too much in the perfect platform for development.

Turning to my Maths PhD wife I asked if there's any subjectivity in mathematics. Her answer is yes there is, mainly in the way we as humans achieve the answer.
If a mathematical proof is the result, how you get to that result can vary. If the dataset is large you may need to use a computer, which can introduce errors, and thus debate about whether that is the right approach. Or sometimes mathematicians disagree on the theory - one is trying to prove that x is true while the other is trying to prove that x is false.
I think the same thing exists in computer science. A correct answer is a program that runs correctly, but that definition of correct may be different for each project. Sometimes correct means no bugs. Sometimes it means running efficiently.
From here programmers can argue how best to achieve the "correct" result. A good example of this is the FizzBuzz application. A simple answer would be just a for loop, but Enterprise FizzBuzz is also "correct" in that it produces the correct answer, yet it is generally laughed at as "bad" engineering due to its overcomplication of the idea (it was a joke app, after all).
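For reference, the "simple answer" really is just a loop. Here is a minimal Python sketch of that straightforward version (printing 1 to 100 is the usual convention, not something specified above):

    # Plain FizzBuzz: multiples of 3 print "Fizz", multiples of 5 print
    # "Buzz", multiples of both print "FizzBuzz", everything else prints
    # the number itself.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)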
How large a role does subjectiveness play in programming? I'd say it's a very large part of what we do, simply because we are human, and because there are multiple ways of getting the "correct" answer so there is disagreement over which way is the best.

Studies have been done showing that certain practices reduce defect rates in software. For instance, one study found a strong correlation between cyclomatic complexity and the probability of a module being fault-prone. Other studies put the average effectiveness of design and code inspections at 55 and 60 percent, respectively. So it appears to be in our best interests to favor simplicity, check metrics, and do code reviews.
We're talking probabilities here, though. If I review your code, I'm not guaranteed to find 60% of your bugs. There are also few absolutes in software development; experienced developers know that the correct answer is generally "it depends." That said, there are a number of practices with objective data in their favor.
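To make the complexity point concrete, here is a small, hypothetical Python illustration (the discount rule and its rates are invented for the example). The first version piles decisions into nested branches; the second expresses the same rule as a data-driven lookup with almost no branching, which is the kind of simplification those studies suggest reduces fault-proneness:

    # Version 1: nested branching -- four decision points, so a
    # cyclomatic complexity of 5.
    def discount_v1(customer_type, order_total):
        if customer_type == "gold":
            if order_total > 100:
                return 0.20
            return 0.10
        elif customer_type == "silver":
            if order_total > 100:
                return 0.10
            return 0.05
        return 0.0

    # Version 2: the same rule as a table lookup -- no explicit branches,
    # so a cyclomatic complexity of 1, and the rates are easier to audit.
    RATES = {
        ("gold", True): 0.20, ("gold", False): 0.10,
        ("silver", True): 0.10, ("silver", False): 0.05,
    }

    def discount_v2(customer_type, order_total):
        return RATES.get((customer_type, order_total > 100), 0.0)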

Related

Does knowing a Natural Language well help with Programming?

We all hear that math at least helps a little bit with programming. My question, though: do English or other natural language skills help with programming? I know they have to help with technical documentation, but what about actual programming? Are certain constructs in a programming language also there in natural languages? Does knowing how to write a 20-page research paper help with writing a 20k LOC programming project?
Dijkstra went so far as to say: "Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer."
Edit: yes, I'm reasonably certain he was talking about the programming part of the job. Here's a more complete quote:
The problems of business administration in general and database management in particular are much too difficult for people who think in IBMerese, compounded by sloppy English.
About the use of language: it is impossible to sharpen a pencil with a blunt axe. It is equally vain to try to do it with ten blunt axes instead.
Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer.
From EWD498.
I certainly can't speak for Dijkstra, but I think it's impossible to cleanly separate the part where you're doing actual programming from the part where you're interacting with people. Just for example, even when you're working alone, it's crucial that you're able to understand (clearly and unambiguously) notes you wrote down about what to do, the nature of a bug, etc. A good command of English is necessary even when nobody else is involved at all (and, of course, that's unusual except on trivial tasks).
I don't know about causality, but the skill set required to write well overlaps quite a bit with those required for programming: knowing how to plan, being able to keep a myriad of details consistent, being able to make things clear for a future reader, knowing how to organize your thoughts and the resultant product. That isn't to say that a successful author would make a good programmer, but a programmer with good language skills and the same logic/math/deductive skills is probably a better programmer than one with poor language skills -- at least the code has a greater chance of being understandable.
Yes. Strong natural language skills help you to organize your thoughts in a coherent way that can easily be understood by others. That can help improve your code in everything from naming variables, methods, and classes, to expressing the contexts of objects in your model. Practices such as pair programming require you to be able to communicate well with your partner in order to write good code. Techniques such as Domain-Driven Design emphasize using the domain language of the business in your code; natural language skills facilitate that. And there is a strong drive in the development industry toward more natural-language-like tools, e.g. many of the newer testing tools like RSpec, Gherkin, etc., are moving toward more natural-language-like syntax. One of the things many people like about dynamic languages like Ruby and Python is that the code tends to read more like a natural language.
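As a small, hypothetical illustration of how that spills into code, compare a function with throwaway names to one that speaks the domain's vocabulary (the invoice example and its names are invented for this sketch):

    # Before: correct, but the reader has to reverse-engineer the intent.
    def calc(a, b, c):
        return a * b * (1 - c)

    # After: the names carry the domain language, so the code reads like
    # the sentence a business person would say.
    def invoice_total(unit_price, quantity, discount_rate):
        return unit_price * quantity * (1 - discount_rate)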
Let me state what should be the obvious: every healthy person above 12 knows at least one natural language. Moreover, every healthy person above 12 is able to generate and parse natural language - a complex and rich medium - and express and understand an extremely large set of ideas. In general, people are not likely to be limited in their ability to discuss issues by their language, but by the type of things they have experienced and learned.
Having said that, there are several language-related skills that you might have thought about.
Writing style. You mentioned those specifically. Written language is different from spoken language. Way less intuitive. This is one reason people have to get coached in writing through their years in the education system.
Coding doesn't really involve writing. I mean, there are comments, but they can be rather laconic. Of course, the work of a programmer usually involves at least some writing of documents, and writing abilities do make a difference there.
Analytical skills. Analytical skills are a complicated (not to say fuzzy) concept. Analytical skills aren't really about language, but insofar as they are taught and tested at all, it's in the context of writing essays.
Analytical skills are obviously very important in programming. I am not sure that these are exactly the same skills required to write a good essay about Euthanasia or whatever, but as was previously suggested, they may be related.
Foreign language. For people whose native language isn't English, a certain command of English may be needed. Not in the coding itself (knowing what "while" means in English isn't really critical to understanding what it does in Java), but because much training and support material is available mainly in English (did anyone mention Stack Overflow?). The English requirement may differ depending on the country you are in and the company you work for, though.
Communication skills. Ahem. I was never exactly sure what this means. Maybe it's a cultural thing. I do suspect it's less about knowing a language and more about knowing people.
So to sum up, Dijkstra is a venerable computer scientist, but I am not sure he knew that much about language.
Programming isn't just about writing code. On any programming project of any size there will be the need for:
initial project proposal documents
design and architectural documents
programmer's manual
user's manual
training materials
communication with third party suppliers
etc.
On every big project I've worked on, I'd guess I spent at least 50% of my time on the English-language documents. So yes, an ability to explain and express yourself well is extremely important. Does it lead to writing better code? Once again, I would say yes - the need to provide clear documentation spills over into the need to write better code, interfaces, et al.

Is OOP abused in universities? [closed]

I started college two years ago, and since then I keep hearing "design your classes first". I really ask myself sometimes: should my solution be a bunch of objects in the first place? Some say that you don't see its benefits because your codebase is very small - university projects. The project-size excuse just doesn't go down well with me. If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project.
I am not saying OOP is bad, I just feel it is abused in classrooms where students like me are told day and night that OOP is the right way.
IMHO, the proper answer shouldn't come from a professor; I'd prefer to hear it from real engineers in the field.
Is OOP the right approach always?
When is OOP the best approach?
When is OOP a bad approach?
This is a very general question. I am not asking for definite answers, just some real design experience from the field.
I don't care about performance. I am asking about design. I know it is engineering in real life.
==================================================================================
Thanks for all the contributions. I chose Nosredna's answer because she addressed my questions in general and convinced me that I was wrong about the following:
If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project.
The professors have the disadvantage that they can't put you on huge, nasty programs that go on for years, being worked on by many different programmers. They have to use rather unconvincing toy examples and try to trick you into seeing the bigger picture.
Essentially, they have to scare you into believing that when an HO gauge model train hits you, it'll tear your leg clean off. Only the most convincing profs can do it.
"If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project."
That's where I disagree. A small project fits into your brain. The large version of it might not. To me, the benefit of OO is hiding enough of the details so that the big picture can still be crammed into my head. If you lack OO, you can still manage, but it means finding other ways to hide the complexity.
Keep your eye on the real goal--producing reliable code. OO works well in large programs because it helps you manage complexity. It also can aid in reusability.
But OO isn't the goal. Good code is the goal. If a procedural approach works and never gets complex, you win!
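A rough sketch of that "hiding the details" point, with invented names: callers of the class below only have to keep its three methods in their heads, not the bookkeeping behind them.

    class Inventory:
        """Callers see receive(), reserve() and available(); the two
        dictionaries that track the real state stay hidden."""

        def __init__(self):
            self._on_hand = {}    # sku -> units physically in stock
            self._reserved = {}   # sku -> units promised to open orders

        def receive(self, sku, count):
            self._on_hand[sku] = self._on_hand.get(sku, 0) + count

        def available(self, sku):
            return self._on_hand.get(sku, 0) - self._reserved.get(sku, 0)

        def reserve(self, sku, count):
            if count > self.available(sku):
                raise ValueError("not enough stock to reserve")
            self._reserved[sku] = self._reserved.get(sku, 0) + count

    # The big picture stays small: no caller ever touches the dictionaries.
    stock = Inventory()
    stock.receive("widget", 10)
    stock.reserve("widget", 4)
    print(stock.available("widget"))  # 6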
OOP is a real world computer concept that the university would be derelict to leave out of the curriculum. When you apply for jobs, you will be expected to be conversant in it.
That being said, pace jalf, OOP was primarily designed as a way to manage complexity. University projects written by one or two students on homework time are not a realistic setting for large projects like this, so the examples feel (and are) toy examples.
Also, it is important to realize that not everyone really sees OOP the same way. Some see it about encapsulation, and make huge classes that are very complex, but hide their state from any outside caller. Others want to make sure that a given object is only responsible for doing one thing and make a lot of small classes. Some seek an object model that closely mirrors real world abstractions that the program is trying to relate to, others see the object model as about how to organize the technical architecture of the problem, rather than the real world business model. There is no one true way with OOP, but at its core it was introduced as a way of managing complexity and keeping larger programs more maintainable over time.
OOP is the right approach when your data can be well structured into objects.
For instance, for an embedded device that's processing an incoming stream of bytes from a sensor, there might not be much that can be clearly objectified.
Also, in cases where ABSOLUTE control over performance is critical (when every cycle counts), an OOP approach can introduce costs that are nontrivial.
In the real world, most often, your problem can be VERY well described in terms of objects, although the law of leaky abstractions must not be forgotten!
Industry generally settles, eventually and for the most part, on using the right tool for the job, and you can see OOP in many, many places. Exceptions are often made for high-performance and low-level work. Of course, there are no hard and fast rules.
You can hammer in a screw if you stick at it long enough...
My 5 cents:
OOP is just one instance of a larger pattern: dealing with complexity by breaking down a big problem into smaller ones. Our feeble minds are limited to a small number of ideas they can handle at any given time. Even a moderately sized commercial application has more moving parts than most folks can fully maintain a complete mental picture of at a time. Some of the more successful design paradigms in software engineering capitalize on the notion of dealing with complexity. Whether it's breaking your architecture into layers, your program into modules, doing a functional breakdown of actions, using pre-built components, leveraging independent web services, or identifying objects and classes in your problem and solution spaces. Those are all tools for taming the beast that is complexity.
OOP has been particularly successful in several classes of problems. It works well when you can think about the problem in terms of "things" and the interactions between them. It works quite well when you're dealing with data, with user interfaces, or building general purpose libraries. The prevalence of these classes of apps helped make OOP ubiquitous. Other classes of problems call for other or additional tools. Operating systems distinguish kernel and user spaces, and isolate processes in part to avoid the complexity creep. Functional programming keeps data immutable to avoid the mesh of dependencies that occur with multithreading. Neither is your classic OOP design and yet they are crucial and successful in their own domains.
In your career, you are likely to face problems and systems that are larger than you could tackle entirely on your own. Your teachers are not only trying to equip you with the present tools of the trade. They are trying to convey that there are patterns and tools available for you to use when you are attempting to model real-world problems. It's in your best interest to accumulate a collection of tools for your toolbox and choose the right tool(s) for the job. OOP is a powerful tool to have, but by far not the only one.
No...OOP is not always the best approach.
(A true) OOP design is the best approach when your problem can best be modeled as a set of objects that can accomplish your goals by communicating/using one another.
Good question...but I'm guessing Scientific/Analytic applications are probably the best example. The majority of their problems can best be approached by functional programming rather than object oriented programming.
...that being said, let the flaming begin. I'm sure there are holes and I'd love to learn why.
Is OOP the right approach always?
Nope.
When OOP is the best approach?
When it helps you.
When OOP is a bad approach?
When it impedes you.
That's really as specific as it gets. Sometimes you don't need OOP, sometimes it's not available in the language you're using, sometimes it really doesn't make a difference.
I will say this though, when it comes to technique and best practices continue to double check what your professors tell you. Just because they're teachers doesn't mean they're experts.
It might be helpful to think of the P of OOP as Principles rather than Programming. Whether or not you represent every domain concept as an object, the main OO principles (encapsulation, abstraction, polymorphism) are all immensely useful at solving particular problems, especially as software gets more complex. It's more important to have maintainable code than to have represented everything in a "pure" object hierarchy.
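As a minimal sketch of those principles in action (the notifier classes are invented for the example): abstraction gives callers a small surface to depend on, and polymorphism replaces the chain of type checks that would otherwise grow as the software does.

    from abc import ABC, abstractmethod

    class Notifier(ABC):
        """Abstraction: callers only ever see send()."""
        @abstractmethod
        def send(self, message):
            ...

    class EmailNotifier(Notifier):
        def send(self, message):
            print(f"emailing: {message}")

    class SmsNotifier(Notifier):
        def send(self, message):
            print(f"texting: {message}")

    def alert_all(notifiers, message):
        # Polymorphism: no isinstance() ladder; each object supplies its
        # own behavior behind the shared interface.
        for notifier in notifiers:
            notifier.send(message)

    alert_all([EmailNotifier(), SmsNotifier()], "build failed")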
My experience is that OOP is mostly useful on a small scale - defining a class with certain behavior that maintains a number of invariants. Then I essentially just use that as yet another datatype to use with generic or functional programming.
Trying to design an entire application solely in terms of OOP just leads to huge bloated class hierarchies, spaghetti code where everything is hidden behind 5 layers of indirection, and even the smallest, most trivial unit of work ends up taking three seconds to execute.
OOP is useful --- when combined with other approaches.
But ultimately, every program is about doing, not about being. And OOP is about "being". About expressing that "this is a car. The car has 4 wheels. The car is green".
It's not interesting to model a car in your application. It's interesting to model the car *doing* stuff. Processes are what's interesting, and in a nutshell, they are what your program should be organized around. Individual classes are there to help you express what your processes should do (if you want to talk about car things, it's easier to have a car object than having to talk about all the individual components it is made up of, but the only reason you want to talk about the car at all is because of what is happening to it: the user is driving it, or selling it, or you are modelling what happens to it if someone hits it with a hammer).
So I prefer to think in terms of functions. Those functions might operate on objects, sure, but the functions are the ones my program is about. And they don't have to "belong" to any particular class.
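A hedged sketch of how those two views coexist (the account example is invented): a small class whose only job is to guard its invariant, with the interesting "doing" living in a plain function that doesn't belong to any class.

    class Account:
        """Small OO core: keeps the balance from ever going negative."""

        def __init__(self, balance=0):
            if balance < 0:
                raise ValueError("balance cannot be negative")
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

        def deposit(self, amount):
            self.balance += amount

    # The process -- what the program is actually about -- is a free function.
    def transfer(source, target, amount):
        source.withdraw(amount)
        target.deposit(amount)

    a, b = Account(100), Account(0)
    transfer(a, b, 30)
    print(a.balance, b.balance)  # 70 30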
Like most questions of this nature, the answer is "it depends."
Frederick P. Brooks said it best in "The Mythical Man-Month": "there is no single strategy, technique or trick that will exponentially raise the productivity of programmers." You wouldn't use a broadsword to make a surgical incision, and you wouldn't use a scalpel in a sword fight.
There are amazing benefits to OOP, but you need to be comfortable with the pattern to take advantage of these benefits. Knowing and understanding OOP also allows you to create a cleaner procedural implementation for your solutions because of the underlying concepts of separation of concerns.
I've seen some of the best results of using OOP when adding new functionality to a system or maintaining/improving a system. Unfortunately, it's not easy to get that kind of experience while attending a university.
I have yet to work on a project in the industry that was not a combination of both functional and OOP. It really comes down to your requirements and what are the best (maybe cheapest?) solutions for them.
OOP is not always the best approach. However it is the best approach in the majority of applications.
OOP is the best approach in any system that lends itself to objects and the interaction of objects. Most business applications are best implemented in an object-oriented way.
OOP is a bad approach for small one-off applications where the cost of developing a framework of objects would exceed the needs of the moment.
Learning OOA, OOD & OOP skills will benefit most programmers, so it is definitely useful for universities to teach them.
The relevance and history of OOP runs back to the Simula languages of the 1960s, as a way to engineer software conceptually, where the developed code defines both the structure of the source and the general permissible interactions with it. Obvious advantages are that a well-defined and well-created object is self-justifying and consistently repeatable as well as reliable; ideally it is also able to be extended and overridden.
The only time I know of that OOP is a 'bad approach' is during embedded system programming efforts where resource availability is restricted; of course, that's assuming your environment gives you access to OOP features at all (as was already stated).
The title asks one question, and the post asks another. What do you want to know?
OOP is a major paradigm, and it gets major attention. If metaprogramming becomes huge, it will get more attention. Java and C# are two of the most used languages at the moment (see: SO tags by number of uses). I think it's ignorant to state either way that OOP is a great/terrible paradigm.
I think your question can best be summarized by the old adage: "When the hammer is your tool, everything looks like a nail."
OOP is usually an excellent approach, but it does come with a certain amount of overhead, at least conceptual. I don't do OO for small programs, for example. However, it's something you really do need to learn, so I can see requiring it for small programs in a University setting.
If I have to do serious planning, I'm going to use OOP. If not, I won't.
This is for the classes of problems I've been doing (which includes modeling, a few games, and a few random things). It may be different for other fields, but I don't have experience with them.
My opinion, freely offered, worth as much...
OOD/OOP is a tool. How good of a tool depends on the person using it, and how appropriate it is to use in a particular case depends on the problem. If I give you a saw, you'll know how to cut wood, but you won't necessarily be able to build a house.
The buzz that I'm picking up on is that functional programming is the wave of the future because it's extremely friendly to multi-threaded environments, so OO might be obsolete by the time you graduate. ;-)

What are some advanced software development topics every developer should know? [closed]

Let's say your company has given you the time & money to acquire training on as many advanced programming topics as you can eat in a year, carte blanche. What would those topics be and how would you prefer to acquire them?
Assumptions:
You still have deliverables to bring into existence, but you're allowed one week per month for the year for this training.
The training can come from anywhere, e.g. classroom, on-site instructor, books, subscriptions, podcasts, etc.
Subject matter can cover any platform, technology, language, DBMS, toolset, etc.
Concurrent/parallel programming and multi-threading, especially with respect to memory models and memory coherency. I think every programmer should be aware of the considerations in this arena as we move into a world of multi-core/multi-CPU hardware.
For this I would probably use Internet research most heavily, but an on-campus primer at a good university could be a good way to start off.
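A minimal Python sketch of why this topic bites people (the counter example is the classic lost-update race, invented here for illustration): the unsynchronized read-modify-write can drop increments depending on thread scheduling, while the locked version cannot.

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        global counter
        for _ in range(n):
            counter += 1          # read-modify-write: not atomic

    def safe_increment(n):
        global counter
        for _ in range(n):
            with lock:            # the lock serializes the update
                counter += 1

    threads = [threading.Thread(target=unsafe_increment, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # With unsafe_increment this may print less than 400000 on some runs;
    # switching the target to safe_increment always yields exactly 400000.
    print(counter)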
Security!
Far too many programmers just build something and think they can add security as an afterthought after finishing the "main" part of the program. You could always benefit from knowing more about how to secure your app, how to design software to be secure from the get-go, how to do intrusion detection, etc.
Advanced Database Development
Things like data warehousing (MDX, OLAP queries, star schemas, fact tables, etc), advanced performance tuning, advanced schema and query patterns, and the like are always useful.
Here are the three that I'm always finding myself explaining to junior developers who didn't get enough CS training. All that other stuff is generally more hype than substance, or can be fairly easily picked up. But if you don't know these three, you can do a great deal of damage:
Algorithm analysis, including Big O notation.
The various levels of cohesion and coupling.
Amdahl's Law, and how it pertains to optimizations (see the sketch below).
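Amdahl's Law in particular fits in a few lines; a quick back-of-the-envelope sketch of why the serial portion caps any speedup:

    def amdahl_speedup(parallel_fraction, workers):
        """Overall speedup when only parallel_fraction of the work scales
        across workers; the rest stays serial."""
        serial_fraction = 1 - parallel_fraction
        return 1 / (serial_fraction + parallel_fraction / workers)

    # If 90% of the work parallelizes, the serial 10% limits you to 10x
    # no matter how much hardware you add:
    print(round(amdahl_speedup(0.90, 8), 1))     # ~4.7
    print(round(amdahl_speedup(0.90, 64), 1))    # ~8.8
    print(round(amdahl_speedup(0.90, 4096), 1))  # ~10.0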
Internationalization issues, especially since it sounds like it would not be an advanced topic. But it is.
Accessibility
It's ignored by so many organizations but the simple fact of the matter is that there are a huge number of people with low or no vision, color blindness, or other differences that can make navigating the web a very frustrating experience. If everybody had at least a little bit of training in it, we might get some web based UIs that are a little more inclusive.
Object oriented design patterns.
I guess "advanced" is different for everyone, but I'd suggest the following as being things that most decent developers (i.e. ones that don't need to be told about NP-completeness or design patterns) could gain from:
Multithreading techniques that go beyond "lock" and when to apply them.
In-depth training to learn and habitualize themselves with clever features in their toolchain (IDE/text editor, debugger, profiler, shell).
Some cryptography theory and hands-on experience with different common flaws in security schemes that people create.
If they program against a database, learn the internals of their database and advanced query composition and tuning techniques.
Developers should know the basics of SQL development and how their decisions impact database performance. It is one thing to write a query; it is another thing to write a query, understand the explain plan, and make design decisions based on that output. I think a good course on PL/SQL development and database performance would be very beneficial.
Unfortunately communication skills seem to fall under the "advanced topics" section for most developers (present company excluded, of course).
Best way to acquire this skill: practice.
Take off the headphones, and talk to someone instead of IM'ing or emailing the guy at the next desk.
Pick up the phone and talk to a client instead of lobbing an email over the fence.
Ask questions at a conference instead of sitting behind your laptop screen twittering.
Actively participate in a non-technical meeting at work.
Present something in public.
Most projects do not fail because of technical reasons. They fail because they could not create a team. Communication is vital to team dynamics.
It will not harm your career either.
One of the best courses I took was a technical writing course. It has served me well in my career.
Additionally: it probably does not matter WHAT the topic is - the fact that the organization is interested in it, is paying for it, and the developers want to go (and do go) is a better indicator of success/improvement than any one particular topic.
I also don't think it matters that much what the topic is. Dev organizations deal with so many things during a project that training and then on the job implementation/trial and error will always get you some better perspective - even if the attempts to try out/use the new stuff fail. That experience will probably help more on the subsequent projects.
I'm a book person, so I wouldn't really bother with instruction.
Not necessarily in this order, and depending on what you know already
OO Programming
Functional Programming
Data structures and algorithms
Parallel processing
Set-based logic (essentially the theory behind SQL and how to apply it)
Building parsers (I only put this here because it actually came up where I work)
Software development methodologies
NP Completeness. Specifically, how to detect if a problem is NP-Complete, and how to build an approximate solution to the problem.
I see this as important because you don't want a developer to try and solve an NP-complete problem by getting the optimum solution, unless the problem's search space is very small, in which case brute force is acceptable. However, as the search space increases, the time required to solve the problem increases exponentially.
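A minimal sketch of the "approximate instead of optimal" idea, using the textbook greedy heuristic for set cover, a classic NP-complete problem (the sets below are invented for the example). Greedy isn't guaranteed to find the smallest cover, but it runs in polynomial time and is provably within a logarithmic factor of the optimum:

    def greedy_set_cover(universe, subsets):
        """Repeatedly pick the subset covering the most uncovered elements."""
        uncovered = set(universe)
        cover = []
        while uncovered:
            best = max(subsets, key=lambda s: len(uncovered & s))
            if not uncovered & best:
                raise ValueError("universe cannot be covered")
            cover.append(best)
            uncovered -= best
        return cover

    universe = {1, 2, 3, 4, 5}
    subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
    print(greedy_set_cover(universe, subsets))  # [{1, 2, 3}, {4, 5}]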
I'd cover new technologies and trends. Some of the new technologies I'm researching/enhancing my skills with include:
Microsoft .NET Framework v3.0/v3.5/v4.0
Cloud Computing Frameworks (Amazon EC2, Windows Azure Services, GoGrid, etc.)
Design Patterns
I am from MS based developer world, so here is my take on this
More about new concepts in Cloud Computing (various APIs, etc.), as the industry has been betting on it for some time.
More about LINQ for the .NET Framework
Distributed databases
Refactoring techniques (which implies also learning to write a good set of unit/functional tests).
Knowing how to refactor is the best way to keep code clean -- it is rare when you get it right the first time (especially in new designs).
A number of refactorings, however, require a decent set of tests to check that the refactoring did not add unexpected behavior.
Parallel computing - the easiest and best way to learn it
Debugging
Debugging by David J. Agans is a good book on the topic. Debugging can be very complex when you deal with multi-threaded programs, crashes, algorithms that don't work, etc. Everybody would be better off being good at debugging.
I'd vote for real-world battle stories. Have developers from other organizations present their successes and failures. Don't limit the presentations to technologies you're using. With a significantly complex project, this is bound to cut into 'advanced' topics you haven't even considered. Real-world successes (and failures) have a lot to teach.
Go to the Stack Overflow DevDays
and the ACCU conferences
Read
Agile Software Development, Principles, Patterns, and Practices (Robert C. Martin)
Clean Code (Robert C. Martin)
The Pragmatic Programmer (Andrew Hunt & David Thomas)
Well if you're here I would hope by now you have the basics down:
OOP Best practices
Design patterns
Application Security
Database Security/Queries/Schemas
Most notably developers should strive to learn multiple programming languages and disciplines, in order for their skill set to be expanded in more than one direction. They don't need to become experts in these other skills but at least have a very acute understanding of integration with their central discipline. This will make them much better developers in the long run, and also let them gain the ability to use all tools at their disposal to create applications that can transcend the limitations of a singular language.
Outside of programming specific topics, you should also learn how to work under Agile, XP, or other team based methodologies in order to be more successful while working in a team environment.
I think an advanced programmer should know how to get your employer to give you the time & money to acquire training on as many advanced programming topics that you can eat in a year. I'm not advanced yet. :)
I'd suggest an Artificial Intelligence class at a college/university. Most of the stuff is fun, easy to grasp (the basics at least), and the solutions to problems are usually creative.
Hitchhikers Guide to the Galaxy.
How would I prefer to acquire the training? I'd love to have a substantial amount of company time dedicated to self-training.
I totally agree on Accessibility. I was asked to look into it for the website at work, and there is a real lack of good knowledge on the subject, not to mention a lack of CSS standards to aid the likes of screen readers.
However, my answer goes to GUI design - it's quite a difficult thing to get right. There are too many awful applications out there that could be prevented just by taking the time to follow HCI (Human-Computer Interaction) advice/designs. Take Google/Apple for inspiration when making a GUI - not your typical hundreds-of-buttons/labels combo that too often gets pushed out.
Automated testing: Unit testing, functional integration testing, non-functional testing
Compiler details (more relevant on some platforms than others): How does the compiler implement certain common constructs in language X? On a byte-code interpreted platform, how does JIT compilation work? What can be JIT-compiled (for example, can virtual calls be JIT compiled?)?
Basic web security
Common design idioms from other problem domains than the one you're working in at the moment.
I'd recommend learning about Refactoring, Test Driven Development, and various unit testing frameworks (NUnit, Visual Test, CppUnit, etc.) I'd also learn how to incorporate automated unit testing into your continuous integration builds.
Ultimately if you can prove your code does what it claims it can do, you don't have to be there to answer questions as to why or how. If a maintainer comes along and tries to "fix" your code, they'll know instantly if they broke it. Tests written around the requirements (use cases) explain to the maintainer what your users wanted it to do, and provide a little working example of how to call it. Think of unit tests as functional documentation.
Test Driven Development (TDD) is a more novel design approach that begins with the requirements, where you start by writing a test before you write the code. You then write exactly enough code to pass the test. You have to stop before you write extra code (that you may never need), because you can always add it later if you find that you really need it.
What makes TDD cool is that a bad interface (such as one with lots of dependencies) is also very hard to write tests for. It's so hard that a coder would rather refactor the interface to make it easier to test. And that refactoring simplifies the code, removing inappropriate dependencies, or grouping related tests together to make it easier to test, thus improving cohesion. By making it immediately apparent to the developer when he's writing a badly interfaced module, the developer sticks to the architecture and gravitates to the principles of tight cohesion and loose coupling. Good interfaces are the natural result. And as a bonus, once you pass all your tests, you know you're done.
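A tiny, hypothetical illustration of that rhythm with Python's built-in unittest (the slugify function and its rules are invented): the tests are written first and state the requirement; the function is just enough code to make them pass.

    import unittest

    # Written second: only enough code to satisfy the tests below.
    def slugify(title):
        return title.strip().lower().replace(" ", "-")

    # Written first: each test is a requirement the code must meet.
    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_joins_words_with_dashes(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_trims_surrounding_whitespace(self):
            self.assertEqual(slugify("  Hello World  "), "hello-world")

    if __name__ == "__main__":
        unittest.main()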
On the surface this seems like an easy question to answer, just enter your favorite pet peeve about what other developers can't do correctly. But when I read through the answers and gave it some thought, I realized that every "advanced topic" brought up was covered in my undergraduate computer science curriculum--20 years ago. And I doubt that OO, security, functional programming, etc. concepts have changed in that time. Sure the tools have, but I argue that tools are different than topics.
So what is an "advanced topic" in computer science? Who is the Turing, Knuth, Yourdon of the 21st century?
I don't have a clear answer to this question, though I'd like to see more work on theories for parallel programming that will enable tools to abstract that messy stuff for developers.
Quite funny that no one has mentioned:
debugging,
the tools & IDE you work with,
and the platform you are developing for.
Everyday development is much more fun if you know your tools really well, and you accomplish more and make your life easier if you know how to debug someone else's code with ease.
Source Control

Consequences of doing "good enough" software

Does doing "good enough" software take anything from you being a programmer?
Here are my thoughts on this:
Well, Joel Spolsky from JoelOnSoftware says that programmers get bored because they do "good enough" (software that satisfies the requirements even though it is not that optimized). I agree, because people like to do things that are right all the way. On one side of the spectrum, I want to go as far as:
Optimizing software in such a way that I can apply as much of the Math and Computer Science knowledge I acquired in college as possible.
Automating as much of the software development process as possible, say: get specs from a repository, generate the code, build, test, and deploy, complete with manuals, in a single automated build step.
On the other hand, a human trait is that we like variety. In order for us to stay engaged (to keep loving programming), we need to jump from one project or technology to another so we don't get bored and can have "fun".
I would like your opinion on whether there are any good or bad side effects of doing "good enough" software on you as a programmer or human being.
I actually consider good-enough programmers to be better than the blue-sky-make-sure-everything-is-perfect variety.
That's because, although I'm a coder, I'm also a businessman and realize that programs are not for the satisfaction of programmers, they're to meet a specific business need.
I actually had an argument in another question regarding the best way to detect a won tic-tac-toe/noughts-and-crosses game (an interview question).
The best solution I received was from a candidate who simply checked all 8 possibilities with if statements. Some gave a generalized solution which, while workable, was totally unnecessary since the specs were quite clear that it was for a 3x3 board only.
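For context, that solution is only a handful of lines. A rough Python sketch of what it amounts to (cell indices 0-8, left to right, top to bottom, are an assumption; the candidate's eight explicit if statements are collapsed here into a loop over the eight winning lines):

    def winner(board):
        """board is a list of 9 cells holding 'X', 'O' or None."""
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # three rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # three columns
                 (0, 4, 8), (2, 4, 6)]              # two diagonals
        for a, b, c in lines:
            if board[a] is not None and board[a] == board[b] == board[c]:
                return board[a]
        return None

    print(winner(['X', 'X', 'X', 'O', 'O', None, None, None, None]))  # X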
Many people thought I was being too restrictive and the "winning" solution was rubbish but my opinion is that it's not the job of a programmer to write perfect beautifully-extendable software. It's their job to meet a business need.
If that business need allows them the freedom to do more than necessary, that's fine, but most software and fixes are delivered under time and cost constraints. Programmers (or any profession) don't work in a vacuum.
As a programmer I want to write excellent software that's defect-free. I'm not particularly interested in gold-plating, the act of adding unnecessary features that "improve" the software, though we all do it to a certain extent. In that sense, I'm satisfied with "good enough" software, if by good enough you mean that I've done what the customer asked and, at the same time, crafted it well and ensured that it is high quality.
What bothers me is when I take shortcuts and write crappy, untested code. I hate writing code that is buggy or where I've failed to refactor it into a better design as I've gone along. When I let a lot of technical debt creep in -- getting too busy writing new features instead of consistently improving old features as I'm adding new ones -- then I know that eventually I'll have something that, while the customer may be happy with it, I won't be.
Fortunately, in my workplace, management knows the value of keeping the code clean and I know the value of not obsessing over the elusive goal of perfection. No code is ever perfect, but "good enough" has to mean that the code is well-crafted. I've learned, and am still learning, to be happy with code that meets the customer's requirements and that the best feature is the one that doesn't need to be implemented. Fortunately, I have enough work to do that dropping features because they're not needed is a good thing.
In my experience, "good enough" always includes hacks, sloppiness, bad commenting, and spaghetti hell, thus leads to lack of scalability, bugs, lagginess, and prevents others from being able to build effectively on your work.
Pax, while I recognize your points about business needs and pragmatism, doing things "by the book" is for the business side. "Good enough for now" and "just get something working right quick" always leads to far more work-hours later on fixing everything, or downright redoing it when it comes to that, than would be spent doing it right the first time. "The book" was written for a reason.
IMO there is a big difference between "good enough" and crappy code. For me "good enough" is all about satisfying the requirements (both functional and non-functional). I think it is dangerous for people to assume that "good enough" means taking shortcuts or not optimizing code. If the non-functional requirements call for optimized code then that is part of my definition of "good enough".
The key to your question is how one defines "good". To a business person, "good" software is software that solves the business need. In that case it is more about ensuring that the specifications were well understood and properly implemented. The business person may very well not care if the program is not as fast or memory-efficient as it could be.
Think about the commercial software you use: is it perfect? I really don't know anyone, including my friends at Microsoft, who would argue that the code in Windows is "perfect" or anything close to it. But it is undeniable that Windows is (and always has been) "good enough" to get millions of people to use it on a daily basis.
This issue goes back long before programming. I'm sure you have heard "If it ain't broke, don't fix it" or the original in French, "Le mieux est l'ennemi du bien." It may have been Voltaire who wrote about the best being the enemy of the good.
And consider what would happen if hiring managers decided to stop hiring "good" programmers and insisted that every applicant had a perfect 4.0 average in college; I for one would never have gotten a job as a programmer ;-)
So for me it is a case of do the best you can given the time and budget constraints. With more time and or more money I could always do better.
"Good enough" is in the eye of the beholder. Far too often, "good enough" is the refuge of incompetent people who write something which creates the impression of satisfying the requirements of a job. My "good enough" is unlikely to be the same as their "good enough".
Ultimately, everything we do must involve trade-offs. Some people will make the wrong trade-offs and deliver crappy software and some people will make the wrong trade-offs and fail to deliver. Rare are the ones who can make the right trade-offs and deliver software that really is good enough.
There are at least two aspects of quality that we have to take into account:
software quality: does the software meet the desired goals/requirements? do we deliver builds which have critical bugs? is it easy for end users to operate?
code quality: how hard is it to maintain the code? is it easy to implement new features?
If you're building a productized software, I think it is good to assume that it's never good enough in both aspects. Every little feature counts and if the users will not find what they need or the product is not stable enough, they will take a look at the competition. You also want to implement new features as quickly as possible, so that you have a competitive advantage in the market.
The situation gets interesting if you're building custom business software, where the end users and decision makers are usually not the same people, then the features/quality/money trade off becomes part of the negotiation process. What we usually do is we put "good enough" constraint on these three aspects: we have a set of requirements to meet, a quality to maintain and usually not enough time to keep both.
What is usually forgotten in this process is the second point: code quality or maintainability. We programmers understand that sooner or later crappy code will take its revenge and result in critical bugs or maintenance costs. Decision makers don't. The problem is, the responsibility and risks are taken by you (your company, your division, etc.) and you will be the first to blame if something goes wrong.
My opinion is: for software quality, do what the client tells you to do; they know best which features are critical for them, how buggy the software can be, etc. For code quality and maintainability: do the best you can, learn to do more, and teach others to do the same. This is where I get the fun from.
Depends what you mean by "good enough". I can see some risk at the design level; if you make it merely good enough you may find maintaining and extending your application painful.
I think of programming as an art - an art that requires efficiency. Is efficient code incompatible with beautiful code? I doubt that. In fact, I think that when you solve a problem creatively it may mean multiplied performance. I don't think that programming should only be about learning new libraries for each new need, nor about bug tracking and fixing. I think it should be about beauty. Of course code cannot always be art, and sometimes one should be pragmatic about the problems encountered.

Development Cost of Procedural Programming vs. OOP?

I come from a fairly strong OO background; the benefits of OOD & OOP are second nature to me, but recently I've found myself in a development shop tied to procedural programming habits. The implementation language has some OOP features, but they are not used in optimal ways.
Update: everyone seems to have an opinion about this topic, as do I, but the question was:
Have there been any good comparative studies contrasting the cost of software development using procedural programming languages versus Object Oriented languages?
Some commenters have pointed out the dubious nature of trying to compare apples to oranges, and I agree that it would be very difficult to accurately measure, however not entirely impossible perhaps.
Almost all of these questions are confounded by the problem that individual programmer productivity varies by an order of magnitude or more; if you happen to have an OO programmer in the group at productivity x, and a "procedural" programmer who is a 10x programmer, the procedural programmer is liable to win even if OO is faster in some sense.
There's also the problem that coding productivity is usually only 10-20 percent of the total effort in a realistic project, so higher productivity doesn't have much impact; even that hypothetical 10x programmer, or an infinitely fast programmer, can't cut the overall effort by more than 10-20 percent.
You might have a look at Fred Brooks' paper "No Silver Bullet".
After poking around with Google I found this paper here. The search terms I used were "productivity object oriented".
The opening paragraph goes on to say:
"Introduction of object-oriented technology does not appear to hinder overall productivity on new large commercial projects, but it neither seems to improve it in the first two product generations. In practice, the governing influence may be the business workflow and not the methodology."
I think you will find that Object-Oriented Programming is better in specific circumstances but neutral for everything else. What sold my bosses on converting my company's CAD/CAM application to an object-oriented framework is that I showed precisely the exact areas in which it would help. The focus wasn't on the methodology as a whole but on how it would help us solve some specific problems we had. For us, that was having an extensible framework for adding more shapes, reports, and machine controllers, and using collections to remove the memory limitations of the older design.
OO and procedural offer two different ways to develop, and both can be costly if badly managed.
If we suppose that the work is done by the best person in both cases, I think the result might be equal in terms of cost.
I believe the cost difference will show up in the maintenance phase, where you will need to add features and modify current ones. Procedural projects are harder to cover with automatic testing, are less able to be extended without affecting other parts, and are harder to understand piece by piece (because cohesive parts aren't necessarily grouped together).
So, I think, the OO cost will be lower in the long run compared to Procedural.
I think S.Lott was referring to the "unrepeatable experiment" phenomenon, i.e. you cannot write application X procedurally, then rewind time and write it OO to see what the difference is.
you could write the same app twice two different ways, but
you would learn something about the app doing it the first way that would help you in the second way, and
you may be better at OO than at procedural, or vice-versa, depending on your experience and the nature of the application and the tools chosen
so there really is no direct basis for comparison
empirical studies are likewise useless, for similar reasons - different applications, different teams, etc.
paradigm shifts are difficult, and a small percentage of programmers may never make the transition
if you are free to develop your way, then the solution is simple: develop things your way, and when your co-workers notice that you are coding circles around them and your code doesn't break nearly as often etc. and they ask you how you do it, then teach them OOP (along with TDD and any other good practices you may use)
if not, well, it might be time to polish the resume... ;-)
Good idea. A head-to-head comparison. Write application X in a procedural style, and in an OO style and measure something. Cost to develop. Return on Investment.
What does it mean to write the same application in two styles? It would be a different application, wouldn't it? The procedural people would balk that the OO folks were cheating when they used inheritance or messaging or encapsulation.
There can't be such a comparison. There's no basis for comparing two "versions" of an application. It's like asking if apples or oranges are more cost-effective at being fruit.
Having said that, you have to focus on things other folks can actually see.
Time to build something that works.
Rate of bugs and problems.
If your approach is better, you'll be successful, and people will want to know why.
When you explain that OO leads to your success... well... you've won the argument.
The key is time. How long does it take the company to change the design to add new features or fix existing ones. Any study you make should focus on that area.
My company had an event-driven, procedure-oriented design for CAM software in the mid-90s, created using VB3. It was taking a long time to adapt the software to new machines, and a long time to test the effects of bug fixes and new features.
When VB6 came along, I was able to graph out the current design and a new design that fixed the testing and adaptation problems. The non-technical boss grasped what I was trying to do right away.
The key is to explain WHY OOP will benefit the project. Use things like Refactoring by Fowler and Design Patterns to show how a new design will lower the time to do things. Also include how you get from Point A to Point B. Refactoring will help with showing how you can have working intermediate stages that can be shipped.
I don't think you'll find a study like that. At the least, you should define what you mean by "cost". Because OOP design is somewhat slower, in the short term development may be faster with procedural programming. In the very short term, maybe spaghetti coding is even faster.
But when the project begins to grow, things are the opposite, because OOP design is better suited to managing code complexity.
So in a small project procedural design MAY be cheaper, because it's faster and you don't hit its drawbacks.
But in a big project you'll get stuck very quickly using only a simple paradigm like procedural programming.
I doubt you will find a definitive study. As several people have mentioned this is not a reproducible experiment. You will find anecdotal evidence, a lot of it. Some people may find some statistical studies, but I would examine them carefully. I am not aware of any really good ones.
I also will make another point: there is no such thing as purely object-oriented or purely procedural in the real world. Many if not most object methods are written with procedural code. At the same time, many procedural programs use OO methodologies such as encapsulation (also called abstraction by some).
Don't get me wrong, OO and procedural programs look and are different, but it is a matter of dark gray vs light gray instead of black and white.
This article says nothing about OOP vs Procedural. But I'd think that you could use similar metrics from your company for a discussion.
I find it interesting as my company is starting to explore the ROWE initiative. In our first session, it was apparent that we don't currently capture enough metrics on outcomes.
So you need to focus on 1) Is the maintenance of current processes impeding future development? 2) How are different methods going to affect #1?