Does pair programming work when there is a skills impedance mismatch? - language-agnostic

For example, can an experienced coder with limited C#.NET experience be successfully paired with an experienced C#.NET coder with the secondary aim of getting the former up to speed with C#.NET?

Absolutely. Sharing knowledge is one of the points of pair-programming (along with the useful dynamic of having one person type for a bit and the other review as they do it).
In my experience, it's one of the most effective ways of doing so, and it still allows the less experienced coder to contribute usefully (it takes less experience to review what an expert is doing and make sensible comments/interventions than to do the entire job).

That depends on the personal chemistry between them. If the more experienced programmer is willing and able to share his knowledge, and to let the less experienced programmer participate in the development through writing code and discussions, I would say that it is a very efficient way of learning.

Yes. I find good pair programming is always two-way; it's essentially a piece of social engineering masquerading as an IT innovation.

Yes, this will work, provided that 1) the programmer with limited experience is receptive to learning C# and 2) the other programmer is willing to teach it.

When the skill mismatch is high, it does become more of a teacher/student relationship. This isn't bad, but it can waste the skilled person's time.
However, even if it's impractical or a waste, it can be very useful to have a very occasional pair session! Even if the student is overwhelmed or it's awkward, sometimes it's useful for "students" to see how people of top level work and think. It helps give them an idea of the problems/skills/methods of high quality work. Think of it as a high school student visiting a research laboratory. It's a waste for the pro scientists to teach the high school student, but the 1 hour visit can help focus the student and give them a glimpse of the ultimate goals...
I remember why I chose Emacs as my editor. I just happened to sit near an expert user, and literally rudely peering over his shoulder I watched him rearrange and navigate code super-quickly. I only watched for less than a minute and I never talked to him.. he may have not even noticed I was watching! But I was floored, and decided to learn Emacs. Ten years later I still don't have as much skill as that expert, but I don't regret my decision to change editors, since I got a glimpse of what was possible.

Personally, I think it would work well and is one of the goals of pair-programming, but how successful it is will depend on the two programmers. If programmer 1 (the one learning C#) is putting in some extra time to truly get up to speed and programmer 2 (the other one) has the patience and desire to teach, it should be good for both.

You can certainly do this - we've done it in the past. But you have to accept that you trade off the "code quality" benefits against training benefits. There's no free training ride, I'm afraid.

It works to some extent. Usually it's one leading the other... so it's not much pair programming in that sense.
It depends heavily on the experienced coder's skill to teach and the other coder's skill to learn quickly.

Yes, but only if the better person is patient and willing to teach and the worse person is willing to learn. I've pair programmed with people not as good as me and it was tedious, but I think they learnt from it. I've pair programmed with people that are better than me and I certainly learnt from it. Depends on the people really.

It can be effective with the following caveat: You must switch partners.
I've actually been in this situation and, if the gap is large, it can be very taxing for both members of the pair. Best to switch partners after a few hours, with the time varying according to your tolerance and the size of the gap. If that option isn't available, mix in some solo programming.

There's a saying that a team is only as strong as its weakest link. Pairing the strongest with the weakest has traditionally been the best strategy, because the weakest learning from the strongest ensures the greatest potential amount of learning. If there is a worry about the strongest being uninterested, then replace them with someone who really would be the strongest.

It all depends on the personality of the developers; there is no hard and fast rule.
One thing that is certain is that the experienced developer will be less productive when working with an inexperienced developer. I personally think there needs to be a good match of abilities when pair programming. It is, however, a very good way of getting inexperienced developers up to speed.

Although it's a good idea, in practice it may not be useful. To train somebody you can organize training and assign a mentor who can help and guide. The mentor can assign work from the real project and monitor progress.
Pair programming should be between relatively experienced people if you want to get the benefits of this concept. In my view, pair programming with an inexperienced person will cost productivity, and it's unclear how much the person will pick up when somebody is constantly checking on them. Assigning a task, giving them the chance to develop it independently, and then reviewing it provides good self-directed learning.

Yes, but the approach needed to make it effective may not be clear at first. The task being pair programmed on should be the task of the less experienced programmer (we will call him Michael). I would also have Michael start the pair programming session, explaining what the objective of the session is. This approach puts Michael in the driver's seat, where the more experienced programmer (we will call him Bill) serves more of a mentoring role.
Typically Bill will either take or be given more complex tasks to work on. This approach allows Michael to work on tasks that are more suited to his experience level. I would recommend switching off at 30-minute to one-hour intervals at first, so that Michael can get used to the process of giving someone else control. You can slowly shorten these switch-offs to 15-minute intervals, or whatever works best for the two developers.

I think that the final results you get depend on the guys that are doing this. In this case, you'd probably end up with one leading the other (and where the other is just paying attention to understand the language features the first one is using).

It depends on how much of a skills impedance mismatch we're talking about.
Two good programmers who know different languages can come up to speed quickly this way, obviously at the cost of a small slowdown for the one who is expert in the current project's language.
If the difference is too big (for example, a veteran and a rookie), it might instead be preferable to start with some other kind of approach, to avoid the risk of it becoming highly counterproductive.
Always beware of extreme pair programming!

Related

Disagreement on software time estimation

How do you deal with a client who has different time estimates for the software product than yours?
I am going to describe a scenario that is not mine, but that captures broadly the same problem. I am working as a subcontractor to a large company that has a programming department. The software project we are working on is in an area the department believes it has a handle on, but because their expertise and mine are very different, we tend to get different results.
Example: At the start of the project I suggested one way of development which they rubbished as being unrealistically difficult and suggested integrating a different framework (one they are familiar with) with the programming language we are using (Python) to get more or less the same result.
Their estimate for this integration: less than a week (they haven't done the integration before).
My estimate for the integration: above two weeks.
Using my suggested way to get the result needed (including using matplotlib among other libraries used elsewhere within the project): 45 minutes. This is not an estimate, the bit was actually finished in 45 minutes.
Example: for the software to be integrated with their internal system, they needed to provide a web service for me to use. They provided a broken one, though it does work with their internal tool (it doesn't work with mainstream .NET or Java packages, among other options). They maintain that it is my fault that the integration has taken longer than the time estimated.
The problem is not that they don't know; the problem is that they have enough knowledge about programming to be dangerous (in my opinion). Are there guidelines for how to deal with this type of situation? A way to manage expectations? Or maybe I shouldn't get involved in such projects from the start, and in this case, what are the telltale signs?
If a client isn't happy with a time estimate, don't do the work. If they think they can do it better or faster, tell them to go ahead.
The one thing I never allow is for my estimates to be modified. That's something that caught me out early on in my career but we learn our lessons.
If clients were so good at doing the work, they wouldn't be hiring me. I'd simply point out that they hired me for my expertise so why are they disregarding that expertise. Of course, if they were to allow the scope of the project to change (i.e., less work), that would be another matter, and one up for discussion.
If you didn't lock in exactly what they were meant to provide as part of the deal, then it's a "he says, she says" situation and, unfortunately, the customer controls the purse strings. However, often, the greatest power you can have is the ability to just walk away.
No-one says you have to do the job.
Of course, all that advice above is worth every cent you paid for it :-)
I don't know your specific circumstances.
Or maybe I shouldn't get involved in such projects from the start, and in this case, what are the telltale signs?
My answer for sure. If you can avoid those projects, do it.
Some signs: people thinking they know how to do things when you can tell they can't. The "oh no, let's not use this perfectly suitable tool because I don't know it" is a major indicator that the person is technically challenged.
First of all, it is no fun to be in such an environment.
So, if you like to have fun at your job, and you do not need to take this job for pressing financial reasons, then simply do not take a job that is not fun.
Since that is hardly realistic in many cases, you will end up with the job and need to manage the situation as best you can. One way is to make sure there is a paper trail documenting your objections and concerns with the plan. Try not to be overtly negative, but try to be constructive and present valid alternatives. Here you will need to feel out the political landscape, determine if the 'boss' will be appreciative or threatened by your commentary, and act accordingly.
Many times there are other issues that management is dealing with that you are not aware of. Be cautious of this fact, and maybe ask the management team if this is the case, again without being condescending or negative.
Finally, if you have alternatives that take less time than the meetings it would take to discuss them, just try it in a sandbox, and show it off. This would go a long way to 'proving' your points. Caution here is that you could be accused of not being a team player, or of wasting resources, or not following direction. Make sure this is mitigated by doing these types of things on your own time, or after careful consideration of how long you are spending on these things as well as how vested your boss seems to be on the alternatives.
hth
I ran into the same problem with integration.
Example: for the software to be integrated with their internal system, they needed to provide a web service for me to use... They maintain that it is my fault that the integration has taken longer than the time estimated.
Wow, very similar to what I was experiencing with a client. The best thing I can suggest is to keep good documentation. In the end that is what saved me. When it came to finger pointing I had all of the emails and facts in order and was prepared to defend myself.
One thing I would suggest is to separate out a target/goal from an estimate. I would not change my estimate unless it involved actually removing features, or something was revealed that would make the work easier. Tell them you will try to hit the target any way you can and that you care about the business goal. However, your estimate will not change. If it's getting nowhere and they are just dense, then smile and nod and take it if it's the only gig around.
Was just writing about this in my blog
How to estimate the WRONG way

How large a role does subjectiveness play in programming?

I often read about the importance of readability and maintainability. Or, I read very strong opinions about which syntax features are bad or good. Or discussions about the values of certain paradigms, like OOP.
Aside from that, this same question floats about in my mind whenever I read debates on SO or Meta about subjective questions. Or read questions about best practices and sometimes find myself or others disagreeing.
What role does subjectiveness play within the programming realm?
Sometimes I think it plays a large role. Software developers are engineers in a way, but also people. A large part of programming is dealing with code that's human readable. This is very different from Math or Physics or other disciplines with very exact and structured rules. Here the exact structure and rules are largely up in the air, changeable on a whim, hence the number of languages in existence. And one person may find one language very readable, while another person may find their own language the most comforting.
The same with practices. One person may not like certain accepted practices. I myself find splitting classes into different files very unreadable, for instance.
But, I can't say rules haven't helped in general. Certain practices have and do make life easier. And new languages have given rise to syntax and structure that make life easier. There's certainly been a progression towards code that is easier to read and maintain even given a largely diverse group of people. So maybe these things aren't as subjective as I thought.
It reminds me, in a way, of UI design. Certainly it's subjective, but then there's an entire discipline involved in crafting good UI and it tends to work.
Is there something non-subjective about the ideas behind maintainability, readability, and other best practices? Is there something tangible to grasp when one develops a new language or thinks of new practices?
Arguably your question is really about the distinction between programming, which is mathematical, algorithmic and scientific, and software engineering, which is subjective, variable and human-focused.
Great programmers are not necessarily great software engineers, and vice versa. The two skillsets, while not exclusive by any means, have less overlap than they appear at first. Their relative importance depends a lot on the project: a brilliant programmer working alone can turn out amazing examples of technical genius, and it doesn't matter that nobody else can understand or maintain it, because he's not going to share the code anyway. But move into an enterprise environment -- like corporate in-house software development -- and I'll gladly trade you ten "cave troll" geniuses for a mediocre programmer who understands the importance of readability and documentation.
It's been my experience that the world needs great software engineers more than it needs great programmers. Relatively few people in this day and age are writing software which is truly performance-critical (OS kernels, compilers, graphics engines, realtime embedded systems, etc), and the Internet allows mediocre programmers to quickly grab algorithmic solutions for problems they couldn't solve alone. But nearly everyone writing professional code has to work within a team. And team productivity rises and falls dramatically on the ability of its members to communicate effectively and distribute workload efficiently, two skills which are highly subjective and impossible to prove by rigid formula.
Most software engineering principles are built on experience rather than objective law. Much like the social sciences, we study, learn, adapt and apply -- but with no real guarantees of outcome. All we can say is that some things seem to work better than others in most groups.
I think, a lot of it is necessarily determined by how much our mind is able to process at one time. So it comes down to how much the language and tools enable a team or a developer to break down the problem into chunks that are meaningful by themselves, but not so large that it becomes too hard to grasp them. The common theme is the art of organizing information (in this case, the code, the logic, ...) But that's not so different from Maths or Physics, by the way.
Just as the best authors borrow from many styles, the best programmers keep a huge range of patterns in their mental arsenal. Slavishly following a few patterns and adhering to some absolute truth is both lazy and dangerous.
Put it another way, the day we rely on robots for code review is the day I quit.
It all depends on your point of view :-)
But to answer your questions, I think one way to view subjectivity is to recognize that software languages, tools, and best practices are a shared means of communication among individuals. Yes, a programming language is a formal way of instructing a computer how to behave, but a programming language may also be viewed as a way to define and communicate specifications to a high level of detail (the code is the ultimate spec, is it not?).
So as far as we may want to concern ourselves with the degree of subjectivity in software languages, tools, and best practices, I would say that the lack of subjectivity may indicate how well communication is facilitated.
Yes, individuals have certain proclivities that are expressed in their habits and tendencies, but that should not ultimately matter too much in the perfect platform for development.
Turning to my Maths PhD wife, I asked if there's any subjectivity in mathematics. Her answer: yes there is, mainly in the way we as humans arrive at the answer.
If a mathematical proof is the result, how you get to that result can vary. If the dataset is large you may need to use a computer, which can introduce errors, and thus there is debate about whether that is the right approach. Or sometimes mathematicians can disagree on the theory: one is trying to prove that x is true while the other is trying to prove that x is false.
I think the same thing exists in computer science. A correct answer is a program that runs correctly, but that definition of correct may be different for each project. Sometimes correct means no bugs. Sometimes it means running efficiently.
From here programmers can argue how best to achieve the "correct" result. A good example of this is the FizzBuzz application. A simple answer would be just a for loop, but Enterprise FizzBuzz is also "correct" in that it produces the correct answer, yet it is generally laughed at as "bad" engineering due to its overcomplication of the idea (it was a joke app, after all).
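For reference, the "simple answer" is roughly this (a minimal Python sketch of my own; the argument is language-agnostic):

```python
# Minimal FizzBuzz: print 1..100, substituting Fizz/Buzz/FizzBuzz
# for multiples of 3, 5, and both.
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```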
How large a role does subjectiveness play in programming? I'd say it's a very large part of what we do, simply because we are human, and because there are multiple ways of getting the "correct" answer, so there is disagreement over which way is best.
Studies have been done showing that certain practices reduce defect rates in software. For instance, one study found a strong correlation between cyclomatic complexity and the probability of a module being fault-prone. Other studies show the average effectiveness of design and code inspections to be 55 and 60 percent, respectively. So it appears to be in our best interests to favor simplicity, check metrics, and do code reviews.
We're talking probabilities here, though. If I review your code, I'm not guaranteed to find 60% of your bugs. There are also few absolutes in software development; experienced developers know that the correct answer is generally "it depends." That said, there are a number of practices with objective data in their favor.
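To make the "check metrics" point concrete, here is a rough sketch (my own illustration, not taken from the studies mentioned) of how a cyclomatic-complexity-style count can be computed with Python's standard ast module; real metric tools count more carefully:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe number: 1 plus the count of decision points."""
    decision_nodes = (ast.If, ast.IfExp, ast.For, ast.While,
                      ast.ExceptHandler, ast.BoolOp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decision_nodes)
                   for node in ast.walk(tree))

# One branch -> complexity 2
print(cyclomatic_complexity("def f(x):\n    return 1 if x else 0"))
```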

Transitioning from "Web" into "Application" Development

I realize this comes at an enormous risk of being branded "subjective" and "discussion-based", but you don't have to argue with anyone, or me. I'd just like honest answers to the question. So first, the question:
In your experience, is it feasible to say I could find a job as a java or other "non-web" language/system developer without a CS degree?
A little background: I am a LAMP(PP) developer and have been working in the web world for the past two years or so, and am about 90% self-taught. [edit] I have been working part-time/freelance in html/css/javascript for about 7 years, and full-time salary doing php/perl for the past 2 years, for clarification. [/edit] A friend of mine who does a lot of Java has convinced me to start learning it, and I'm starting to be curious about my potential employability in a "non-web" environment. So far I've worked for marketing firms and done application development for a web-based system, so having a Bachelor's degree in a non-related field (music) hasn't held me back yet.
Acceptable format for non-subjective answers:
"Our company does not require a specific degree, if you have a few years of employment history and can prove you know programming you can get a job"
-or-
"With the economy being the way it is, the only way to ensure you'll get past the first level of screening is to have an extensive relevant education"
Yes you can (otherwise I would not have a job). If you can show that you know the stuff, many companies will not bother too much about formal degrees. After all, they (should) hire people because they have certain skills, not because they acquired those skills in a specific manner. Studying at a university is one way, working with programming in practice is another.
Now, I think (guessing here, given my background) that you would learn things at university that you will not typically pick up as an autodidact, and that might still be useful when working as a programmer. But I would expect that the more time you have spent working on actual software that has been delivered to (hopefully happy) customers, the less the lack of a formal degree will matter.
In my experience, it pretty much depends on where you want to work, and in some cases, how far up the management ladder you want to go.
Some places I've worked don't really care if you have a degree or not. In fact, in one place I worked it was almost a detriment to have a degree, since much of management didn't have degrees.
Other places I've worked have required a "related" degree. In at least one case I was personally involved with, that was to their detriment, because a friend of mine was much better and more knowledgeable than any of the developers there, and looking for a new job, but they wouldn't even talk to her since she didn't have a degree.
Finally, for some employers, it's just whether you have a degree or not, and your major is not important. I know one guy doing Java development, and his degree is in history. Also, I work in a science facility, and lots of people here have degrees in a relevant science, and have little or no formal software development education.
It's a mind-shift for sure, but I don't see any reason why you can't make the transition. A good programmer is a good programmer regardless of the language they're using. I've known guys who write excellent Java code and easily make the transition to JavaScript or Ruby. Where you might come unstuck is the fundamentals of computer science, which is what you really get from a formal education. Things like pointers, memory management, and threading are areas I tend to find "self-taught" developers are usually not so strong in, but there's no reason you can't learn them, and if you can show prospective employers that you have a good grasp of the concepts and know how to use them, I don't think you'll have too much trouble.
In your case you should use your music degree as an advantage and look to land your first application programming job in a company doing music-related application development. Your music degree coupled with programming experience of any kind will certainly open some doors.
You may need to take an initial cut in pay to make the transition, though.
Speaking from my own experience and for my country, Switzerland: I started with an internal education program in a big company and do not have a degree in CS up to now, some 23 years later. I had difficulty finding a job in the two years following the burst of the internet bubble, which I was able to bridge by being self-employed and with some unemployment money from the government.
Most big companies here do not care about degrees, unless you want to pursue a career up the ladder. But then you need an MBA, not CS.
There is one exception, though, which is the consulting companies. They sell their services in proportion to the number of doctorates they have on the team, so no chance there, unless you have connections.
Small companies here know that they have to invest for somebody to get to know the tools and languages they use, so if they are in need, they will hire you even with little experience in the exact field.
It might not harm to get some stuff done with the tools you would prefer to work with, but: good employers know that learning yet another language is not the hard part, and work experience outweighs private experience.
Go for it. Don't quit your day job; just start looking around. After a few years, experience will for sure outweigh education on the job market.
If you have the chance to take some classes on the side, though, you can only benefit personally.
My experience has always led me to evaluate past evidence, meaning: what is there to show what you have achieved? If you are entering a new technology, it will be a while before you have such evidence, which means you may have to restart at the bottom. Do you want to do that? If you are prepared to offer all your skills to a new job (including learning new languages), that would be better. In this way you become incrementally more valuable, irrespective of whether you have a degree or not.
I'm mostly self-taught, and in my career, I have transitioned from VB to C++/MFC to "classic" ASP to Java, and that doesn't even scratch the surface of all the ancillary technologies I had to learn to get my job done. I think all developers should expect to pick up new skills. It just comes with the job.
I think you're making an odd distinction between "Web" and "Application" development. Practically all of my Java work has involved building web sites, and the same is probably true for C#/.NET developers as well. As a LAMP guy, you already have a few key employable skills -- you know Linux, you understand (or should understand) the TCP/IP stack, how HTTP works, etc. And you know how to put together an attractive-looking interface, which is a rarer skill than you might think. You can leverage all these skills as you make the transition to Java/.NET/Whatever.
As for the music degree, don't sweat it. Some of the best programmers I've ever worked with have been musicians. I don't have a CS degree, and it hasn't slowed me down much. A certification might help get you in the door at some companies, but before that try a pet project or two in Java and see where it takes you.

How to keep a programming course interesting? [closed]

I guess the following is a standard problem at every school or university:
It is your job to teach programming. Unfortunately, some of the students are semi-professionals with years of experience, while others do not even know basic concepts, e.g. the concept of a "typed variable".
As far as I know, this leads to one of the following situations:
Programming is taught from its very basics. The experienced students get bored and stop attending the lectures. As a consequence, they will miss even the stuff they do not already know.
Teachers and professors claim that they require basic knowledge (whatever that means). Inexperienced students cannot follow the lectures and a lot of them will focus on unimportant stuff (e.g. understanding every detail of a complex example while not getting the concept behind the example). Some of them will give up.
Universities invent an artificial programming language to give experienced programmers and newbies "equal chances". Most students will get frustrated about the "useless language".
Is there a fourth solution, which is better than those above?
IMO this is a problem based on the placement of the students, not something you should be too interested in dealing with on your end as a teacher.
If the course is an introduction to programming a computer, then you really need to start with the basics. If you have a classroom full of professionals who know how to program and they don't show, it was either a problem with your course description, or the school forcing them to take the class as a pre-req without allowing them to test out.
Your job should be to describe what you want to teach in the course description, and teach it. If students enroll who are overqualified, that's their problem. I think the only thing you really need to avoid is trying to make the course too advanced for beginners if your course really is for beginners.
I think the best way to keep it interesting is to bring up practical and interesting exercises alongside the theory. Taking a problem-solution approach is great (with interesting, funny, exciting, real-world problems). This requires the professor himself to have hands-on experience, to work with new technologies and know them pretty well, and not just teach what he learned a couple of decades ago.
The thing is, programming should be learned by practice. The instructor should focus on motivating students to code and try to solve problems themselves. This can be done by assigning a complete life-like project at the beginning of the course and working through the subproblems that occur in the project in the class. This way, students will have an idea why some specific feature in the programming language exists and where it might be useful.
Just a thought though. Not tried it! ;)
I recently attended a course in which there was a very wide spectrum of experience in programming among the students. They still managed to keep the experienced programmers in the class interested by having an exercise program in which they timed the practical parts of the exercises (the programming part), and posted the results in a high score chart. At the end of each lecture, the professor gave some pointers as to how we could improve our times even more. As we all know, all engineers love competing for topping such lists, so we kept showing up, and even learned a new thing or two.
The inexperienced students managed to complete the exercises too, even if they didn't care too much about their times.
Don't know if your course is one that can implement this solution, but if it is, you should really consider it.
I think there are a couple of things you can do to help bridge the gap between advanced and beginner students and to keep everyone interested and involved in the course.
Advanced Workshops
If it can be arranged (using PhD students etc.), run an optional weekly workshop which anyone can attend, but which is aimed at the more experienced students. Set a coding task/challenge each week, and then at the workshop go through various solutions to the problem and discuss the implications and the theory behind the different choices.
This provides an interesting challenge for the more experienced coders, as they have something to get their teeth into. It opens up some debate and can help intermediate people grasp interesting concepts, and if you get people to present their solutions, it introduces an open reviewing style which is beneficial. It also helps the beginners in that you don't have to present really advanced concepts in the main lecture series just to keep the experienced people interested.
Student Involvement
Experienced people generally are experienced because they enjoy coding, and a lot of people love to share their knowledge. A really good way to use this, and to help both beginners and advanced students, is to get the more advanced students involved in the teaching. If you run classes/labs where students complete exercises, try getting volunteers from the more experienced students to act as mentors/supervisors for the labs. When the beginners struggle, they can help out by explaining fine details or subtleties.
This can really help the beginners, as there are usually never enough staff available for everyone to ask individual questions. It can also really benefit the more advanced people, since having to explain a concept you "know" is a great way to reinforce it in your own mind, and even to discover that you have subtle misunderstandings in your own knowledge.
Don't assume more than you need to; try to select programming environments which don't have too much intellectual baggage. You may think a C "Hello world" program is simple, but it requires understanding source files, compilation, static typing and block structure. These are not easy concepts for a beginner. In comparison, typing "print 'hello world'" into a Python shell avoids them. Declarations, compound types, object orientation, pointers, floating point, recursion, modularity, threads, callbacks, networking, databases and so on are all major concepts which require effort to learn. And there are plenty of fun things to be done without them. Your goal should be to get everyone in the group doing programming exercises as soon as possible.
Mixed ability teaching is hard; stream it by splitting the group up if you can. Maybe publish a quiz of basic concepts, and have an optional basic concepts section for those who didn't get 100%. Some people think they are experienced programmers but have misunderstood basic ideas.
If the course time available is too short to let people try lots of exercises, then I'd drop the more advanced material before I dropped practical work.
In one course I took, a large part of the course grade was derived from an end-of-term project which was announced in advance, with extra credit available for assorted add-ons and frills. Sufficiently experienced students could start working on it while their less prepared brethren were being taught the basics.
But as Dave Markle says, part of this is a matter of getting the right students into your class: you really want a cohort that is fairly well matched at the start.
If you have many experienced students, or this is an upperclassman/graduate-level course, you should focus on integration into an existing ecosystem. Being able to understand and integrate into an existing project, rather than always working from scratch, is the most important skill you can give those students.
Thus, programming assignments should come from real world scenarios. E.g., assign them tasks in an open source project. This can also make it more interesting, especially as their work may become part of a real world project.
If they're really beginners, tough luck; you will have to stick to the basics. Though if the students are non-CS majors, you can create problems from their own domains (e.g., engineering, chemistry, etc.).
I think you could be toast.
After some point, the difference is just too wide. It will take the whole year to get the beginners to the point where they can even understand stuff that won't bore the more advanced people.
However, this clearly depends on the topic and setting. For some combinations of those, the solution is to teach to the level that the class is billed for. Those that are too advanced will get bored and quit; those that are too inexperienced will fall behind and quit. Don't worry about it too much, as neither should have taken the class anyway. If, on the other hand, they need to take the class, then someone further up the ladder messed up.
Sitting in your chair watching someone talk is boring (even if you talk well).
Things are interesting if you can achieve something, when you can manipulate the world and have a moment of success. So add as many practical exercises as you can and make really sure that they can do them in time and that they can do them successfully in time.
Nothing is more frustrating than to hear: "Well, I'm sorry that you couldn't complete it. You can find a solution here. Let's copy that and pretend it did work and move on." Examples during a course are simple and the people in front of you know that. So if they can't even solve the simple examples you bring along for them, what are they going to think?
I always think it is best to learn through practice. At the beginning of the course especially, it is incredibly boring to teach language syntax in a lecture. It is far better to require your students to complete some work on their own, or in a lab with assistants. This allows the more experienced students to get through the work quickly.
Once this is done you can have a lecture where you discuss some of the solutions to problems. Why they are good and why they are bad.
This works especially well if you also structure your course in such a way that students are always building on their previous work. The first week can be something simple, like calculating how many days old I am from my birthday: a problem that is relatively simple mathematically but has a few weird cases (leap years, for one). This might take several hours for someone inexperienced, especially if they are learning syntax at the same time. But it gives them a simple goal to work toward, something like the sketch below.
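A minimal version of that first-week exercise (my own Python sketch, not from the course described):

```python
from datetime import date

def days_old(birthday: date) -> int:
    """How many days old someone born on `birthday` is today."""
    # date subtraction handles month lengths and leap years for us;
    # doing it by hand is where the "weird cases" bite beginners.
    return (date.today() - birthday).days

print(days_old(date(1990, 5, 17)))
```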
After this you can expand on it, e.g., take last week's program and add functionality that allows it to batch process a file. This teaches people the importance of restructuring and refactoring, and can be expanded week after week. You may even want to distribute a good piece of work from the previous week for those who are falling behind to use. Obviously you will need to make sure people don't get too far behind, but this is a nice way to make sure that everyone feels they have a fair shot at it, even if their previous week's work wasn't too good. Those who are doing well will end up well ahead.
The key is to keep your lecture sessions relatively high level, and have people learn the syntax on their own, or with lab assistants. You can teach them different ways of thinking about a problem but the actual act of writing code is much easier to learn by doing it.
I once, through a scheduling nightmare, ended up teaching a beginner class and an advanced class in the same classroom at the same time. What I did was divide my time between the two levels: starting out by giving the advanced group an assignment to do in class while I worked with the beginners, and then giving the beginners an assignment while I worked with the advanced group. You could do something similar (only having the groups self-select into the group they wanted to be in). Prepare some extra material for the more advanced ones and you are off to the races.
Another strategy is to keep everything at the beginner level, but offer the more advanced students some other material to do for extra credit (or even as substitution for some of the simpler tasks you require of the beginners). Discuss the more advanced assignments with them outside of class or individually while the class is working on practical work in the lab.
Keeping the lectures interesting with plenty of real world examples is helpful too. I tended to lecture as little as possible though and present the material more through class discussion and practical exercises and through asking leading questions. Making them find the information to answer your questions (and class participation was part of the grade) will make them pay more attention.
I also ended each semester with a course project for which I described only what they had to do in order to get a B. An A would involve doing work beyond that, including work in an area not covered in class. The more advanced students can then really shine by looking for really cool new things to do, and even the beginners usually find a way to do something not covered by the course. It's amazing how much extra effort they will go to when they don't know how much more they have to do to get an A. Other instructors were amazed at the quality of the end-of-course projects I got, and several of them started using the same method.
It may be better to break what some would call "Introductory Programming" up into a few areas of concern:
1) Introduction to personal computers and modern computing. Assuming that the course's software runs on Windows, some students may need to cover the basics of a computer, e.g. what a hard drive, keyboard, mouse, monitor, CPU, and motherboard are. Note that this has nothing to do with even one line of code, other than potentially naming operating systems. For some people this may all be new, so a course that covers the basics may well be worth it. Also in this course would be ways to use a mouse and all its various buttons, the various kinds of cables and connections people have, what drivers are, what patches are, and what the parts of a network are, e.g. firewall, router, load balancers, etc. The idea here isn't to get into how to configure a firewall perfectly, but rather that the person understands what the various hardware components are for, with configuring a home wireless network perhaps the most complicated concept taught.
2) Principles of programming. This would start with the idea of what steps there are to execute a sequence of commands. Things like printing and performing mathematical operations, e.g. converting from Imperial to metric, would be covered, with sorting perhaps being the most complicated example, viewed through a variety of different algorithms and a basic-level understanding of big-O notation.
3) Introduction to Data Structures and Advanced Programming. Now, let's introduce the concept of a relational database, how databases work in general, and projects with real-world application, e.g. have each student take a list of something they own, like DVDs or CDs, and put it into a database schema to efficiently store all this data. Also, the idea of floating point arithmetic and its limitations, e.g. that a computer doesn't store the whole value of pi but rather an approximation that should be good enough in most cases (see the sketch after this list).
4) Introduction to Parallel Programming and Operating Systems. Here you would have some in-depth work on building an operating system, on writing code that can run concurrently or in parallel, and on how efficient various programs are under different circumstances.
That is how I could see someone breaking up programming so that it isn't something someone can learn in a week well enough to pass the final without looking at anything else.
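To illustrate the floating-point limitation from item 3, a tiny Python demonstration (my own example, not part of the proposed curriculum):

```python
import math

# 0.1 and 0.2 have no exact binary representation, so the error shows:
print(0.1 + 0.2)                      # 0.30000000000000004
print(0.1 + 0.2 == 0.3)               # False

# math.pi is a 64-bit approximation, not the true value of pi:
print(math.pi)                        # 3.141592653589793

# The usual fix: compare with a tolerance instead of exact equality.
print(math.isclose(0.1 + 0.2, 0.3))   # True
```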
I have frequently been in this situation, first on the student side of things, and then on the teaching side.
Most schools force those sorts of courses and their curriculum. This is silly, but such is life. If your school does allow it, I would suggest offering students an attendance waiver if they pass an early screening test. It is in your interest and the interest of the freshmen not to sit in a class where a significant portion of the population is bored. Even being in a room with tons of people staring at their laptops harms discourse. Everyone is required to attend tests and submit assignments, but at least they don't have to show up.
Once you work with the novices, figure out if they're majors or non-majors. Non-majors will resent being in a CS course; you have to try to make it approachable for them. E.g., use examples from physics or chemistry or math rather than from building an interactive GUI system.
If they're CS majors, they'd better damned be interested :)
My opinion is that teaching sample programs is dead boring for most people. Searching, sorting, classification of 7-bit ASCII input, using Unix and make, opening a file, writing a file...
These are boring problems. Regardless of their importance/usefulness, these are tools. Unfortunately, tools are what's taught in intro courses, not problems.
But you need tools to be able to solve a problem, so it's a kind of chicken-and-egg problem.
Use real-world examples of code the students can imagine themselves writing in their free time.
I remember a teacher telling me to use const values; it was the tax rate of something. I only had to use the value in two places. She asked what if I needed to change it; I said it's only in two places and I'd change it by hand, and besides, I couldn't imagine the government ever changing the tax percentage.
I can't think of a non-complex example where I would use a const, so I wouldn't try teaching them that. But for arrays I would simply write a guessing game; when the player wins, it plays back all the guesses to them in the same order. There is no easy way to do that without arrays, and I could see how keeping track of someone's steps/guesses would be useful (bragging rights for how quickly a person guessed it).
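A minimal sketch of that guessing game (my own Python version; any language with arrays works the same way):

```python
import random

secret = random.randint(1, 100)
guesses = []  # the array that makes the playback possible

while True:
    guess = int(input("Guess a number between 1 and 100: "))
    guesses.append(guess)
    if guess < secret:
        print("Too low.")
    elif guess > secret:
        print("Too high.")
    else:
        print(f"Got it in {len(guesses)} guesses! You guessed:")
        for g in guesses:  # play back every guess, in order
            print(g)
        break
```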
On the first day, give the syllabus (what they will learn) and the required basic knowledge (things you must know or else not take this class), and stick with it. After that, all you can do is teach well (explain things well, answer questions, give a joke or two now and then, etc.). Caring about who attends class, whether or not the field is boring, whether students lied about prerequisites, who listens, and the other yada yada is beyond your control. Besides, you should expect adults to be adults. If students skip class and ace the tests, maybe that is best for them. If they skip class and bomb the tests, well, maybe they are in the wrong place.
I hated when professors had this mentality when I was in college. Now as a working professional I understand it.
Center the programming exercises around either sports or movies.

Development Cost of Procedural Programming vs. OOP?

I come from a fairly strong OO background, and the benefits of OOD and OOP are second nature to me, but recently I've found myself in a development shop tied to procedural programming habits. The implementation language has some OOP features, but they are not used in optimal ways.
Update: everyone seems to have an opinion about this topic, as do I, but the question was:
Have there been any good comparative studies contrasting the cost of software development using procedural programming languages versus Object Oriented languages?
Some commenters have pointed out the dubious nature of trying to compare apples to oranges, and I agree that it would be very difficult to measure accurately, though perhaps not entirely impossible.
Almost all of these questions are confounded by the problem that individual programmer productivity varies by an order of magnitude or more; if you happen to have an OO programmer working at productivity x, and a "procedural" programmer who is a 10x programmer, the procedural programmer is liable to win even if OO is faster in some sense.
There's also the problem that coding productivity is usually only 10-20 percent of the total effort in a realistic project, so higher coding productivity doesn't have much impact; even that hypothetical 10x programmer, or an infinitely fast programmer, can't cut the overall effort by more than 10-20 percent.
You might have a look at Fred Brooks' paper "No Silver Bullet".
After poking around with Google I found this paper. The search terms I used were "productivity object oriented".
The opening paragraph goes on to say:
Introduction of object-oriented technology does not appear to hinder overall productivity on new large commercial projects, but it neither seems to improve it in the first two product generations. In practice, the governing influence may be the business workflow and not the methodology.
I think you will find that object-oriented programming is better in specific circumstances but neutral for everything else. What sold my bosses on converting my company's CAD/CAM application to an object-oriented framework was that I showed precisely the areas in which it would help. The focus wasn't on the methodology as a whole but on how it would help us solve specific problems we had. For us that meant having an extensible framework for adding more shapes, reports, and machine controllers, and using collections to remove the memory limitation of the older design.
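The kind of extensibility described might look like this minimal sketch (illustrative names, and Python for brevity rather than the VB actually used):

```python
import math
from abc import ABC, abstractmethod

class Shape(ABC):
    """Adding a new shape means adding one subclass; the code that
    draws, reports on, or machines shapes never has to change."""
    @abstractmethod
    def area(self) -> float: ...

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:
        return math.pi * self.radius ** 2

class Rectangle(Shape):
    def __init__(self, width: float, height: float):
        self.width, self.height = width, height
    def area(self) -> float:
        return self.width * self.height

# A growable collection replaces the fixed-size structures of an old design.
shapes: list[Shape] = [Circle(2.0), Rectangle(3.0, 4.0)]
print(sum(s.area() for s in shapes))
```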
OO and procedural offer two different ways to develop, and both can be costly if badly managed.
If we suppose the work is done by the best person in each case, I think the result might be equal in terms of cost.
I believe the cost difference will show up in the maintenance phase, where you need to add features and modify current ones. Procedural projects are harder to cover with automated testing, are less able to expand without affecting other parts, and are harder to understand part by part (because cohesive parts aren't necessarily grouped together).
So I think the OO cost will be lower in the long run compared to procedural.
I think S.Lott was referring to the "unrepeatable experiment" phenomenon, i.e. you cannot write application X procedurally, then rewind time and write it OO to see what the difference is.
You could write the same app twice in two different ways, but:
you would learn something about the app doing it the first way that would help you in the second way, and
you may be better at OO than at procedural, or vice versa, depending on your experience and the nature of the application and the tools chosen,
so there really is no direct basis for comparison.
Empirical studies are likewise useless, for similar reasons: different applications, different teams, etc.
Paradigm shifts are difficult, and a small percentage of programmers may never make the transition.
If you are free to develop your way, then the solution is simple: develop things your way, and when your co-workers notice that you are coding circles around them and your code doesn't break nearly as often, and they ask you how you do it, then teach them OOP (along with TDD and any other good practices you may use).
If not, well, it might be time to polish the resume... ;-)
Good idea. A head-to-head comparison. Write application X in a procedural style, and in an OO style and measure something. Cost to develop. Return on Investment.
What does it mean to write the same application in two styles? It would be a different application, wouldn't it? The procedural people would balk that the OO folks were cheating when they used inheritance or messaging or encapsulation.
There can't be such a comparison. There's no basis for comparing two "versions" of an application. It's like asking if apples or oranges are more cost-effective at being fruit.
Having said that, you have to focus on things other folks can actually see.
Time to build something that works.
Rate of bugs and problems.
If your approach is better, you'll be successful, and people will want to know why.
When you explain that OO leads to your success... well... you've won the argument.
The key is time. How long does it take the company to change the design to add new features or fix existing ones. Any study you make should focus on that area.
My company had an event-driven, procedure-oriented design for CAM software in the mid '90s, created using VB3. It was taking a long time to adapt the software to new machines, and a long time to test the effects of bug fixes and new features.
When VB6 came along, I was able to graph out the current design and a new design that fixed the testing and adaptation problems. The non-technical boss grasped what I was trying to do right away.
The key is to explain WHY OOP will benefit the project. Use things like Refactoring by Fowler and Design Patterns to show how a new design will lower the time to do things. Also include how you get from Point A to Point B. Refactoring will help with showing how you can have working intermediate stages that can be shipped.
I don't think you'll find a study like that. At the least, you should define what you mean by "cost". Because OO design is somewhat slower up front, in the short term development may be faster with procedural programming. In the very short term, maybe spaghetti coding is faster still.
But when the project begins growing, things are the opposite, because OO design is better equipped to manage code complexity.
So in a small project procedural design MAY be cheaper, because it's faster and you don't hit the drawbacks.
But in a big project you'll get stuck very quickly using only a simple paradigm like procedural programming.
I doubt you will find a definitive study. As several people have mentioned this is not a reproducible experiment. You will find anecdotal evidence, a lot of it. Some people may find some statistical studies, but I would examine them carefully. I am not aware of any really good ones.
I will also make another point: there is no such thing as purely object-oriented or purely procedural in the real world. Many if not most object methods are written with procedural code. At the same time, many procedural programs use OO methodologies such as encapsulation (also called abstraction by some).
Don't get me wrong, OO and procedural programs look and are different, but it is a matter of dark gray vs light gray instead of black and white.
This article says nothing about OOP vs Procedural. But I'd think that you could use similar metrics from your company for a discussion.
I find it interesting as my company is starting to explore the ROWE initiative. In our first session, it was apparent that we don't currently capture enough metrics on outcomes.
So you need to focus on 1) Is the maintenance of current processes impeding future development? 2) How are different methods going to affect #1?