Let's say your company has given you the time & money to acquire training on as many advanced programming topics as you can eat in a year, carte blanche. What would those topics be, and how would you prefer to acquire them?
Assumptions:
You still have deliverables to bring into existence, but you're allowed one week per month over the year for this training.
The training can come from anywhere, e.g. classroom, on-site instructor, books, subscriptions, podcasts, etc.
Subject matter can cover any platform, technology, language, DBMS, toolset, etc.
Concurrent/parallel programming and multithreading, especially with respect to memory models and memory coherency. I think every programmer should be aware of the considerations in this arena as we move into a world of multi-core/multi-CPU hardware.
For this I would probably use Internet research most heavily, but an on-campus primer at a good university could be a good way to start off.
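To make the hazard concrete, here is a minimal sketch in Python (purely illustrative; any language with shared-memory threads has the same issue): an unsynchronized read-modify-write loses updates under contention, and a lock restores correctness.

    # A classic lost-update race: several threads increment a shared counter.
    # Without synchronization, increments are lost; a Lock fixes it.
    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        global counter
        for _ in range(n):
            counter += 1          # read-modify-write: not atomic

    def safe_increment(n):
        global counter
        for _ in range(n):
            with lock:            # the lock makes the update atomic
                counter += 1

    def run(worker, n=100_000):
        global counter
        counter = 0
        threads = [threading.Thread(target=worker, args=(n,)) for _ in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()
        return counter

    print("unsafe:", run(unsafe_increment))  # often less than 400000
    print("safe:  ", run(safe_increment))    # always 400000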
Security!
Far too many programmers just build something and think they can add security as an afterthought after finishing the "main" part of the program. You could always benefit from knowing more about how to secure your app, how to design software to be secure from the get-go, how to do intrusion detection, etc.
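As one tiny illustration of building security in from the get-go rather than bolting it on, here is a sketch in Python with sqlite3 (the table and data are invented): SQL built by string formatting invites injection, while a parameterized query closes that door.

    # Hypothetical user lookup: the first query is injectable,
    # the second uses placeholders, so input is never parsed as SQL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

    name = "nobody' OR '1'='1"  # attacker-controlled input

    # BAD: string formatting lets the input rewrite the query.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % name).fetchall()
    print("injectable query matched:", len(rows))    # 2 - every user leaked

    # GOOD: the driver passes the value separately; nothing is reinterpreted.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
    print("parameterized query matched:", len(rows)) # 0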
Advanced Database Development
Things like data warehousing (MDX, OLAP queries, star schemas, fact tables, etc), advanced performance tuning, advanced schema and query patterns, and the like are always useful.
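For readers who haven't met a star schema, here is a toy sketch (schema and numbers invented, using SQLite from Python for brevity): a central fact table joined to its dimension tables and rolled up, which is the shape most warehouse queries take.

    # A miniature star schema: one fact table keyed to two dimensions.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INT, month INT);
        CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
        CREATE TABLE fact_sales  (date_id INT, product_id INT, amount REAL);
        INSERT INTO dim_date    VALUES (1, 2008, 9), (2, 2008, 10);
        INSERT INTO dim_product VALUES (1, 'books'), (2, 'tools');
        INSERT INTO fact_sales  VALUES (1, 1, 10.0), (1, 2, 25.0), (2, 1, 7.5);
    """)

    # Revenue by year and category: join the fact to its dimensions, roll up.
    for row in db.execute("""
        SELECT d.year, p.category, SUM(f.amount) AS revenue
        FROM fact_sales f
        JOIN dim_date d    ON d.date_id    = f.date_id
        JOIN dim_product p ON p.product_id = f.product_id
        GROUP BY d.year, p.category
    """):
        print(row)   # (2008, 'books', 17.5), (2008, 'tools', 25.0)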
Here are the three that I'm always finding myself explaining to junior developers who didn't get enough CS training. All that other stuff is generally more hype than substance, or can be fairly easily picked up. But if you don't know these three, you can do a great deal of damage:
Algorithm analysis, including Big O notation.
The various levels of cohesion and coupling.
Amdahl's Law, and how it pertains to optimizations (see the worked example after this list).
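As promised above, a quick worked example of Amdahl's Law (the numbers are illustrative): if a fraction p of a program's runtime benefits from a speedup of s, the overall speedup is 1 / ((1 - p) + p / s), which is why the serial part dominates long before you run out of cores.

    # Amdahl's Law: overall speedup when fraction p is sped up by factor s.
    def amdahl(p, s):
        return 1.0 / ((1.0 - p) + p / s)

    # 90% of the runtime parallelizes perfectly across 8 cores:
    print(amdahl(0.90, 8))    # ~4.7x, not 8x
    # Even with unlimited cores, the serial 10% caps the speedup at 10x:
    print(amdahl(0.90, 1e9))  # ~10x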
Internationalization issues, especially since it sounds like it would not be an advanced topic. But it is.
Accessibility
It's ignored by so many organizations, but the simple fact of the matter is that there are a huge number of people with low or no vision, color blindness, or other differences that can make navigating the web a very frustrating experience. If everybody had at least a little bit of training in it, we might get some web-based UIs that are a little more inclusive.
Object oriented design patterns.
I guess "advanced" is different for everyone, but I'd suggest the following as being things that most decent developers (i.e. ones that don't need to be told about NP-completeness or design patterns) could gain from:
Multithreading techniques that go beyond "lock" and when to apply them (see the sketch after this list).
In-depth training to learn and habituate themselves to clever features in their toolchain (IDE/text editor, debugger, profiler, shell).
Some cryptography theory and hands-on experience with different common flaws in security schemes that people create.
If they program against a database, learn the internals of their database and advanced query composition and tuning techniques.
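For the first item, here is one step beyond raw locks, sketched in Python (names illustrative): a bounded queue coordinates a producer and a consumer, giving you blocking, backpressure, and shutdown signaling without hand-rolled lock-and-condition code.

    # Producer/consumer without explicit locks: a bounded queue handles
    # the blocking, the wake-ups, and (via a sentinel) the shutdown.
    import threading, queue

    q = queue.Queue(maxsize=4)   # bounded: producers block when it is full
    DONE = object()              # sentinel telling the consumer to stop

    def producer():
        for i in range(10):
            q.put(i)             # blocks if the queue is full (backpressure)
        q.put(DONE)

    def consumer():
        while True:
            item = q.get()       # blocks until an item is available
            if item is DONE:
                break
            print("consumed", item)

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for t in threads: t.start()
    for t in threads: t.join()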
Developers should know the basics of SQL development and how their decisions impact database performance. It is one thing to write a query; it is another to write a query, understand the explain plan, and make design decisions based on that output. I think a good course on PL/SQL development and database performance would be very beneficial.
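The answer above is about Oracle and PL/SQL, but the explain-plan habit carries over to any database. A small sketch of the same before/after check using SQLite from Python (schema invented):

    # Check whether a query scans the whole table or uses an index,
    # and let the plan drive the design decision (here: add an index).
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")

    query = "SELECT * FROM orders WHERE customer = 'acme'"

    # Before indexing: a full table scan.
    for row in db.execute("EXPLAIN QUERY PLAN " + query):
        print(row)   # ... SCAN orders

    db.execute("CREATE INDEX idx_orders_customer ON orders(customer)")

    # After indexing: an index search.
    for row in db.execute("EXPLAIN QUERY PLAN " + query):
        print(row)   # ... SEARCH orders USING INDEX idx_orders_customer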
Unfortunately communication skills seem to fall under the "advanced topics" section for most developers (present company excluded, of course).
Best way to acquire this skill: practice.
Take off the headphones, and talk to someone instead of IM'ing or emailing the guy at the next desk.
Pick up the phone and talk to a client instead of lobbing an email over the fence.
Ask questions at a conference instead of sitting behind your laptop screen twittering.
Actively participate in a non-technical meeting at work.
Present something in public.
Most projects do not fail because of technical reasons. They fail because they could not create a team. Communication is vital to team dynamics.
It will not harm your career either.
One of the best courses I took was a technical writing course. It has served me well in my career.
Additionally: it probably does not matter WHAT the topic is - the fact that the organization is interested in it, is paying for it, and the developers want to go and do go is a better indicator of success/improvement than any one particular topic.
I also don't think it matters that much what the topic is. Dev organizations deal with so many things during a project that training and then on the job implementation/trial and error will always get you some better perspective - even if the attempts to try out/use the new stuff fail. That experience will probably help more on the subsequent projects.
I'm a book person, so I wouldn't really bother with instruction.
Not necessarily in this order, and depending on what you know already
OO Programming
Functional Programming
Data structures and algorithms
Parallel processing
Set based logic (essentially the theory behind sql and how to apply it)
Building parsers (I only put this, because it actually came up where I work)
Software development methodologies
NP Completeness. Specifically, how to detect if a problem is NP-Complete, and how to build an approximate solution to the problem.
I see this as important because you don't want a developer to try and solve an NP-complete problem by getting the optimum solution, unless the problem's search space is very small, in which case brute force is acceptable. However, as the search space increases, the time required to solve the problem increases exponentially.
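To make that concrete (this example is mine, not the answer's): minimum vertex cover is NP-complete, yet a simple greedy pass yields a cover guaranteed to be at most twice the optimal size, which is often good enough in practice.

    # 2-approximation for minimum vertex cover: repeatedly take both
    # endpoints of any uncovered edge. Result is <= 2x the optimum.
    def approx_vertex_cover(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.add(u)
                cover.add(v)
        return cover

    edges = [(1, 2), (2, 3), (3, 4), (4, 5), (2, 5)]
    print(approx_vertex_cover(edges))  # {1, 2, 3, 4}; an optimal cover is {2, 4}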
I'd cover new technologies and trends. Some of the new technologies I'm researching/enhancing my skills with include:
Microsoft .NET Framework v3.0/v3.5/v4.0
Cloud Computing Frameworks (Amazon EC2, Windows Azure Services, GoGrid, etc.)
Design Patterns
I am from the MS-based developer world, so here is my take on this:
More about new concepts in cloud computing (various APIs, etc.), as the industry has been betting on it for some time.
More about LINQ for the .NET Framework.
Distributed databases
Refactoring techniques (which implies also learning to write a good set of unit/functional tests).
Knowing how to refactor is the best way to keep code clean -- it is rare to get it right the first time (especially in new designs).
A number of refactorings, however, require a decent set of tests to check that the refactoring did not add unexpected behavior.
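A minimal sketch of that workflow (function and test names invented): pin the current behavior with a test, then refactor knowing the test will flag any change in behavior.

    # Pin behavior first; then the refactoring cannot silently change it.
    import unittest

    def total_price(items):             # original, slightly tangled version
        t = 0
        for i in items:
            t = t + i["qty"] * i["unit_price"]
        return t

    def total_price_refactored(items):  # refactored version, same behavior
        return sum(i["qty"] * i["unit_price"] for i in items)

    class TestTotalPrice(unittest.TestCase):
        CART = [{"qty": 2, "unit_price": 3.0}, {"qty": 1, "unit_price": 4.5}]

        def test_old_and_new_agree(self):
            self.assertEqual(total_price(self.CART),
                             total_price_refactored(self.CART))
            self.assertEqual(total_price_refactored(self.CART), 10.5)

    if __name__ == "__main__":
        unittest.main()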
Parallel computing - the easiest and best way to learn it
Debugging
Debugging by David J. Agans is a good book on the topic. Debugging can be very complex when you deal with multi-threaded programs, crashes, algorithms that don't work, etc. Everybody would be better off being good at debugging.
I'd vote for real-world battle stories. Have developers from other organizations present their successes and failures. Don't limit the presentations to technologies you're using. With a significantly complex project, this is bound to cut into 'advanced' topics you haven't even considered. Real-world successes (and failures) have a lot to teach.
Go to the Stack Overflow DevDays and the ACCU conferences.
Read
Agile Software Development, Principles, Patterns, and Practices (Robert C. Martin)
Clean Code (Robert C. Martin)
The Pragmatic Programmer (Andrew Hunt & David Thomas)
Well if you're here I would hope by now you have the basics down:
OOP Best practices
Design patterns
Application Security
Database Security/Queries/Schemas
Most notably developers should strive to learn multiple programming languages and disciplines, in order for their skill set to be expanded in more than one direction. They don't need to become experts in these other skills but at least have a very acute understanding of integration with their central discipline. This will make them much better developers in the long run, and also let them gain the ability to use all tools at their disposal to create applications that can transcend the limitations of a singular language.
Outside of programming specific topics, you should also learn how to work under Agile, XP, or other team based methodologies in order to be more successful while working in a team environment.
I think an advanced programmer should know how to get your employer to give you the time & money to acquire training on as many advanced programming topics as you can eat in a year. I'm not advanced yet. :)
I'd suggest an Artificial Intelligence class at a college/university. Most of the stuff is fun, easy to grasp (the basics at least), and the solutions to problems are usually creative.
The Hitchhiker's Guide to the Galaxy.
How would I prefer to acquire the training? I'd love to have a substantial amount of company time dedicated to self-training.
I totally agree on accessibility. I was asked to look into it for the website at work, and there is a real lack of good knowledge on the subject, not to mention a lack of CSS standards to aid the likes of screen readers.
However, my answer goes to GUI design - it's quite a difficult thing to get right. There are too many awful applications out there that could be prevented just by taking the time to follow HCI (Human-Computer Interaction) advice/designs. Take Google/Apple for inspiration when making a GUI - not your typical hundreds-of-buttons/labels combo that too often gets pushed out.
Automated testing: Unit testing, functional integration testing, non-functional testing
Compiler details (more relevant on some platforms than others): How does the compiler implement certain common constructs in language X? On a byte-code interpreted platform, how does JIT compilation work? What can be JIT-compiled (for example, can virtual calls be)? (See the sketch after this list.)
Basic web security
Common design idioms from other problem domains than the one you're working in at the moment.
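For the compiler item above, one cheap way to study how a construct is implemented, assuming a Python toolchain here (other platforms have analogues, such as javap for JVM bytecode): disassemble the construct and read what the compiler actually emits.

    # Disassemble a function to see how CPython compiles a for-loop.
    import dis

    def count(n):
        total = 0
        for i in range(n):
            total += i
        return total

    dis.dis(count)   # shows GET_ITER / FOR_ITER driving the loop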
I'd recommend learning about Refactoring, Test Driven Development, and various unit testing frameworks (NUnit, Visual Test, CppUnit, etc.) I'd also learn how to incorporate automated unit testing into your continuous integration builds.
Ultimately if you can prove your code does what it claims it can do, you don't have to be there to answer questions as to why or how. If a maintainer comes along and tries to "fix" your code, they'll know instantly if they broke it. Tests written around the requirements (use cases) explain to the maintainer what your users wanted it to do, and provide a little working example of how to call it. Think of unit tests as functional documentation.
Test Driven Development (TDD) is a more novel design approach that begins with the requirements, where you start by writing a test before you write the code. You then write exactly enough code required to pass the test. You have to stop before you write extra code (that you may never need), because you will refactor it later if you find that you really needed it.
What makes TDD cool is that a bad interface (such as one with lots of dependencies) is also very hard to write tests for. It's so hard that a coder would rather refactor the interface to make it easier to test. And that refactoring simplifies the code, removing inappropriate dependencies, or grouping related tests together to make it easier to test, thus improving cohesion. By making it immediately apparent to the developer when he's writing a badly interfaced module, the developer sticks to the architecture and gravitates to the principles of tight cohesion and loose coupling. Good interfaces are the natural result. And as a bonus, once you pass all your tests, you know you're done.
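A miniature red-green example of the cycle described above (names invented): the tests are written first, and the function contains only enough code to pass them.

    # TDD in miniature: tests first (red), then just enough code (green).
    import unittest

    def slug(title):
        # Just enough code to satisfy the tests below - no more.
        return title.strip().lower().replace(" ", "-")

    class TestSlug(unittest.TestCase):
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slug("Hello World"), "hello-world")

        def test_trims_whitespace(self):
            self.assertEqual(slug("  Hello  "), "hello")

    if __name__ == "__main__":
        unittest.main()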
On the surface this seems like an easy question to answer, just enter your favorite pet peeve about what other developers can't do correctly. But when I read through the answers and gave it some thought, I realized that every "advanced topic" brought up was covered in my undergraduate computer science curriculum--20 years ago. And I doubt that OO, security, functional programming, etc. concepts have changed in that time. Sure the tools have, but I argue that tools are different than topics.
So what is an "advanced topic" in computer science? Who is the Turing, Knuth, Yourdon of the 21st century?
I don't have a clear answer to this question, though I'd like to see more work on theories for parallel programming that will enable tools to abstract that messy stuff for developers.
Quite funny that no one has mentioned:
Debugging.
The tools and IDE you work with.
The platform you are developing for.
Everyday development is much more fun if you know your tools really well, and you accomplish more and make your life easier if you know how to debug someone else's code with ease.
Source Control
Based on my own experience and on the experience of my friends, I see that many companies have the strange idea of developing their own frameworks and SW factories (which build the skeleton of an application for you). These ideas are usually based on the belief that their own framework will be much better than anything else available. How do you deal with such ideas, and how do you explain that it is not always a good way to go?
Why I think internal frameworks / factories are not good:
Budget & Resources - There is usually only some initial budget to create the framework. Nobody thinks about the budget needed to maintain and support the framework. Nobody can even estimate the budget and resources needed for maintenance. At the beginning, nobody thinks about maintaining multiple versions of the framework to support already existing applications.
Lack of experience - The framework is usually created by people without any such experience, or with the support of "consultants" - generally much more expensive people with a similar skill set.
Architecture / design - Any architectural problem in the framework affects all applications built using the framework. Bad design decisions in the framework force developers to code applications in a bad way.
Technical debt - Bad code in the framework is technical debt.
False belief in a silver bullet - Managers believe that their own framework / factory is a silver bullet: all applications will be written in the same way and will be easily maintainable. My experience is that this is simply not true. Even with a SW factory, each application is specific.
Insufficient documentation - Documentation is the first thing affected by a low budget. A framework without documentation is useless. Reflector (.NET) is my best friend.
Insufficient user group - An internal framework has only a small user group, and a small user group means little accumulated experience. If I'm using a public tool / framework and I have a problem, I can ask a question on SO (or a similar site) or just try to find the answer on Google. With an internal framework that is not possible.
Policy - Company policy forces you to use the framework to vindicate the framework's costs. This goes so far that the framework is chosen before the first requirement is gathered.
Complaints about the framework are prohibited.
Usage of other frameworks is prohibited.
Why I think companies are doing this:
Arrogance & egoism - Somebody in the company believes that he can do it better.
Ignorance - Ignoring existing frameworks / solutions and the fact that only good frameworks survive long enough to become popular. Ignoring the user group and the information already available on the Internet.
Management failure and incompetence - Not understanding the impact (especially long term) of this decision. Decisions based on incorrect information. Management without a SW development background.
I understand that sometimes an in-house solution or framework for a specialized scenario is needed, but I'm tired of all these "great internal frameworks" for creating web or desktop applications. Am I wrong? Are these frameworks really needed (in the .NET and Java worlds)? Can you provide me with an example or reason why it is good to have an internal framework / factory?
Edit:
Thanks for the answers, but I expected some advice on how to deal with the problem as a developer (other than changing jobs), not as a manager.
In my experience the most common cause of excess frameworks is... bored developers! Uninspired developers find that developing frameworks to solve their problems is much more fun than actually solving those problems - the end result is frameworks that suffer from all of the above (because of course the developer only did the fun bits), and possibly don't even solve the actual problem (because the goal was to have fun, not to solve the problem).
The solution is tricky - it's difficult to know what motivates developers, as everyone is motivated by different things; however, motivated developers who are busy doing things that they enjoy don't seem to suffer from this ailment!
That said, well-thought-out frameworks, when properly used, are definitely a good thing - however, if it's only going to be used internally, then it might be better to think of it as an extension of refactoring and code reuse rather than as a framework.
A classic sign that someone is suffering from bored developer framework syndrome is when a framework is being developed to solve the general case when there isn't yet a solution for the specific case:
How can you know how to solve the general case until you have solved at least one specific case?
Until you've had to solve the general case a few times you are only guessing that a framework is even needed.
The second case of course leads to the worst sort of framework - the mammoth generic framework that is only ever used once, to solve a problem which is far simpler than the framework itself.
Instead, consider these sorts of frameworks more as an exercise in "extensive refactoring": if the framework is produced as a way of grouping and tidying common code on an as-needed basis, then it will grow in size and complexity organically. Having already solved the problem before you start producing the framework is nice too, as it means you are already experts in whatever it is the framework needs to do.
Finally, try to keep developers from being bored (they get up to all sorts of mischief otherwise!)
Generalization is bad, but here is what I've noticed, especially in large enterprise projects:
Such behavior is generally driven by one or a few software engineers (they usually call themselves software architects) who fall into the descriptions you wrote in your question. Everybody needs to be proud of something to have the courage to wake up in the morning. I will add that they are usually CV-driven for that reason, and try to apply the latest design patterns without thinking about the business implications and ROI. The key is NOT to hire that kind of person (or to coach him/her to understand the business consequences of his/her choices). Trying to make them proud of working for the company instead of working on their framework may help too.
Missed deadlines, bugs, and high turnover tend to be addressed by applying fancy methodologies (usually badly implemented) like Scrum, or by hiring highly priced consultants who make things worse, instead of removing (or coaching) the people who shouldn't have been hired in the first place.
Removing the person in question is in most cases a bad thing, since he OWNS the thing. So teaching him/her to understand the consequences of his/her choices is probably the most appropriate way to solve the problem. But to do that, you need a good manager.
So my only advice would be:
Hire better managers who understand both business and software development very well. They won't hire that kind of person, or will try to teach them to consider the business in addition to pure software development. They also understand that the most powerful motivational fuel for employees is making them proud of working for the company.
I would add some other reasons these things evolve, and I have seen this in more than one place:
- Developer lock-in. Once you have the devs coding in a non-transferable skill set, it's harder for them to leave.
- Author lock-in. Once you have several apps under maintenance dependent on the framework, you have the organisation dependent on the administrative group.
- Political control. The centralisation allows the framework to become a channel for political control.
I have an interesting idea for a new programming language. It's based on a new programming paradigm that I've been working out in my head for some time. I finally got around to starting work on a basic parser and interpreter for it a few weeks ago.
I want my new language to be successful and I want to eventually create a community around it when it's ready to release. The idea behind it is fairly innovative, so I don't expect it to gain a lot of ground in the business world, but it would thrill me more than anything else to see a handful of start-ups or open source projects use it.
So taking those aims into account, what can I do to help make my language successful? What do language projects do to become successful? What should I avoid at all costs? I'd love to hear opinions or stories about other languages -- successful or not -- so I can think about them as I continue to develop.
So far, the biggest concerns on my mind are finding a market, access to existing libraries, and having amazing tool support. What else might I add to this list?
The true answer is by having a beard.
http://blogs.microsoft.co.il/blogs/tamir/archive/2008/04/28/computer-languages-and-facial-hair-take-two.aspx
Although not specific to new programming languages, the book Producing Open Source Software by Karl Fogel (available to read online) may contain some hints on the issue of building a community around your new programming language.
In terms of adoption of programming languages in general, it seems like the trend lately has been to have a rich library to make development times shorter.
As there isn't much detail on what your language is like, it's hard to determine whether adoption of the language is going to depend on the availability of a rich library. Perhaps your language will be able to fill a niche that has been overlooked by other languages and be able to gain users. Or perhaps it has a slick name that will draw people in -- there are many factors which can affect the adoption of a language.
Here are some factors that come to mind when thinking about recent successful languages:
Ability to leverage existing libraries in the new language.
Having an adapter to external libraries written in other languages.
Python allows access to code written in C through the Python/C API.
Targeting a platform which already has plenty of libraries available for use.
Groovy and Scala target the Java platform, therefore allowing the use of, and interoperation with, existing Java code.
Language design and syntax to allow increased productivity.
Many dynamically-typed languages have gained popularity, such as Ruby and Python to name a couple.
More concise and clear code can be written in languages such as Groovy, as opposed to verbose languages such as Java.
Offering features such as functions as first-class objects and closures which aren't offered in more "traditional" languages such as C and Java.
A community of dedicated users who are also willing to teach newcomers about the benefits of a language.
The human factor is going to be big in wide-spread support for a language -- if people never start using your language, it won't gain more users.
Also, another suggestion that I could add is to make the development of your language open -- keep your users posted on developments in your language, and allow people to give you feedback. Better yet, let your users take part in the decision-making process, if you feel that is appropriate.
I believe that the more ways you offer people to participate in bringing up a language, the more they will feel they have a stake in its success, and the more likely it is to gain support.
Good luck!
Most languages that end up taking off rapidly do so by means of a killer app. For C it was Unix. Ruby had Rails. JavaScript is the only available programming system common to most browsers without third-party add-ons.
Another means of success is by fiat. This only works if you have significant clout. For example, C#, as nice a language as it might be, wouldn't be anywhere near as popular as it is now if Microsoft had not pushed it as hard as it does. Objective-C is the language of Mac OS X simply because Apple says so.
The vast majority of languages, though, which lack a single killer app or a major corporate backer have gained success through long term investment of their respective creators. Perl and Python are prime examples. C++ has no single entity behind it, but it has evolved as the needs of developers have changed.
Don't worry about trying to make the language be successful; worry about using it to solve real problems and make real money.
You'll either make lots of money from using this language, or not. Once you have lots of money, others may care how you did it. Or not, either way you have lots of money.
If you don't make lots of money, nobody will want to know how you did it.
Edit based on comment: I define successful as people using it, and people use languages to solve problems, most for profit, thus successful == profitable.
In addition to making the language easy to use (which has several meanings), you should develop a comprehensive library that covers, and provides a good level of abstraction over, the following most important areas:
* Data structures and manipulation
* File I/O support
* XML processing
* Networking (plus web based technologies like HTTP/HTTPS)
* Database support
* Synchronous and asynchronous I/O
* Processes and threads
* Math
A well thought out framework that makes rapid development faster (and easier to maintain) would be a great addition. For this, you should know the currently popular frameworks well.
Keep in mind that it takes a lot of time. I think it took Python about 10 years (someone please correct me if I'm wrong).
So even if your community still seems small after say, 5 years, that's not the end of the story.
"It's based on a new programming paradigm that I've been working out in my head for some time."
While laudable, odds are really good that someone has already done something with your "new" paradigm.
To make a language usable, it must build on prior art. Totally new is not a good path to success. My favorite example is Algol 68.
Algol 60 was wildly popular (back in the day, which is a while ago, admittedly).
The experts wanted to build on this success. They proposed some new paradigms, and the effort split into factions. The purists put the new paradigms into Algol 68; it disappeared into obscurity. Some folks created a different version of Algol, called PL/I. It did not have any really new paradigms. It actually went somewhere and was used heavily. Another group created Pascal -- it didn't have much that was new -- it discarded things from Algol 60. It actually went somewhere and was used heavily.
Your new paradigm must have a clear and concise summary so people can fit it into a context of where the language is usable, how it can be used, what the costs and benefits of using it are.
A "new programming paradigm" causes some people to say "why learn a completely new paradigm when the ones I have work so nicely?" You have to be very clear on how it helps to have a new paradigm.
The language and libraries must work, and work very, very well. A language that isn't rock-solid is worthless. In order to be rock-solid it must be very simple.
It has to have a tutorial that will help anyone get started with your language.
Good Framework for Common Tasks
Easy Installation/Deployment
Good Documentation
Debugger/IDE and other Tools
A popular flagship product that uses your language!
Good documentation, including a detailed reference manual as well as simple examples to get people started quickly.
Good library support so that people can actually write useful programs.
Most popular languages seem to be very strong in either or both of those.
Use the Trojan Horse approach:
C++ - The Forgotten Trojan Horse
An interesting article on why C++ could grab the hearts of programmers so successfully.
I come from a fairly strong OO background; the benefits of OOD and OOP are second nature to me, but recently I've found myself in a development shop tied to procedural programming habits. The implementation language has some OOP features, but they are not used in optimal ways.
Update: everyone seems to have an opinion about this topic, as do I, but the question was:
Have there been any good comparative studies contrasting the cost of software development using procedural programming languages versus Object Oriented languages?
Some commenters have pointed out the dubious nature of trying to compare apples to oranges, and I agree that it would be very difficult to accurately measure, however not entirely impossible perhaps.
Almost all of these questions are confounded by the problem that individual programmer productivity varies by an order of magnitude or more; if you happen to have an OO programmer who is one of the group at productivity x, and a "procedural" programmer who is a 10x programmer, the procedural programmer is liable to win even if OO is faster in some sense.
There's also the problem that coding productivity is usually only 10-20 percent of the total effort in a realistic project, so higher coding productivity doesn't have much impact; even that hypothetical 10x programmer, or an infinitely fast programmer, can't cut the overall effort by more than 10-20 percent.
You might have a look at Fred Brooks' paper "No Silver Bullet".
After poking around with Google I found this paper here. The search terms I used were "productivity object oriented".
The opening paragraph goes on to say:
Introduction of object-oriented technology does not appear to hinder overall productivity on new large commercial projects, but it neither seems to improve it in the first two product generations. In practice, the governing influence may be the business workflow and not the methodology.
I think you will find that object-oriented programming is better in specific circumstances but neutral for everything else. What sold my bosses on converting my company's CAD/CAM application to an object-oriented framework is that I showed precisely the areas in which it would help. The focus wasn't on the methodology as a whole but on how it would help us solve the specific problems we had. For us that was having an extensible framework for adding more shapes, reports, and machine controllers, and using collections to remove the memory limitations of the older design.
OO and procedural offer two different ways to develop, and both can be costly if badly managed.
If we suppose that the work is done by the best person in both cases, I think the result might be equal in terms of cost.
I believe the cost difference will show up in the maintenance phase, where you need to add features and modify current ones. Procedural projects are harder to test automatically, are less able to expand without affecting other parts, and are harder to understand concept by concept (because cohesive parts aren't necessarily grouped together).
So I think the OO cost will be lower in the long run compared to procedural.
I think S.Lott was referring to the "unrepeatable experiment" phenomenon, i.e. you cannot write application X procedurally, then rewind time and write it OO to see what the difference is.
you could write the same app twice two different ways, but
you would learn something about the app doing it the first way that would help you in the second way, and
you may be better at OO than at procedural, or vice-versa, depending on your experience and the nature of the application and the tools chosen
so there really is no direct basis for comparison
empirical studies are likewise useless, for similar reasons - different applications, different teams, etc.
paradigm shifts are difficult, and a small percentage of programmers may never make the transition
if you are free to develop your way, then the solution is simple: develop things your way, and when your co-workers notice that you are coding circles around them and your code doesn't break nearly as often etc. and they ask you how you do it, then teach them OOP (along with TDD and any other good practices you may use)
if not, well, it might be time to polish the resume... ;-)
Good idea. A head-to-head comparison. Write application X in a procedural style, and in an OO style and measure something. Cost to develop. Return on Investment.
What does it mean to write the same application in two styles? It would be a different application, wouldn't it? The procedural people would balk that the OO folks were cheating when they used inheritance or messaging or encapsulation.
There can't be such a comparison. There's no basis for comparing two "versions" of an application. It's like asking if apples or oranges are more cost-effective at being fruit.
Having said that, you have to focus on things other folks can actually see.
Time to build something that works.
Rate of bugs and problems.
If your approach is better, you'll be successful, and people will want to know why.
When you explain that OO leads to your success... well... you've won the argument.
The key is time. How long does it take the company to change the design to add new features or fix existing ones. Any study you make should focus on that area.
My company had an event-driven, procedure-oriented design for CAM software in the mid-90s, created using VB3. It was taking a long time to adapt the software to new machines, and a long time to test the effects of bug fixes and new features.
When VB6 came along, I was able to graph out the current design and a new design that fixed the testing and adaptation problems. The non-technical boss grasped what I was trying to do right away.
The key is to explain WHY OOP will benefit the project. Use things like Refactoring by Fowler and Design Patterns to show how a new design will lower the time to do things. Also include how you get from point A to point B. Refactoring will help show how you can have working intermediate stages that can be shipped.
I don't think you'll find a study like that. At the least, you should define what you mean by "cost", because OOP design is somewhat slower, so in the short term development may be faster with procedural programming. In the very short term, maybe spaghetti coding is faster still.
But when the project begins growing, things are reversed, because OOP design is better suited to managing code complexity.
So in a small project procedural design MAY be cheaper, because it's faster and you don't hit its drawbacks.
But in a big project you'll get stuck very quickly using only a simple paradigm like procedural programming.
I doubt you will find a definitive study. As several people have mentioned this is not a reproducible experiment. You will find anecdotal evidence, a lot of it. Some people may find some statistical studies, but I would examine them carefully. I am not aware of any really good ones.
I also will make another point: there is no such thing as purely object-oriented or purely procedural in the real world. Many if not most object methods are written with procedural code. At the same time many procedural programs use OO methodologies such as encapsulation (also called abstraction by some).
Don't get me wrong, OO and procedural programs look and are different, but it is a matter of dark gray vs light gray instead of black and white.
This article says nothing about OOP vs Procedural. But I'd think that you could use similar metrics from your company for a discussion.
I find it interesting as my company is starting to explore the ROWE initiative. In our first session, it was apparent that we don't currently capture enough metrics on outcomes.
So you need to focus on 1) Is the maintenance of current processes impeding future development? 2) How are different methods going to affect #1?
We have a series of closed source applications and libraries, for which we think it would make sense opening up the source code.
What has been blocking us, so far, is the effort needed to clean up the code base and documenting the source before opening up.
We want to open up the source only if we have a reasonable chance of the projects being successful -- i.e. having contributors. We are convinced that the code would be interesting to a large base of developers.
Which factors, excluding the "interestingness" and "usefulness" of the project, determine the success of an open source project?
Examples:
Cleanliness of code
Availability of source code comments
Fully or partially documented APIs
Choice of license (GPL vs. LGPL vs. BSD, etc...)
Choice of public repository
Investment in a public website
There are several things which dominate the success of open source code. All of these must be achieved for even the slightest chance of adoption.
Market - There must be a market for your open source project. If your project is an orange juicer in space, I doubt that you'll be very successful. You must make sure your project gets large adoption amongst users and developers. It is twice as likely to succeed if you can get other corporations to adopt it as well.
Documentation - As you touched on earlier, documentation is key. Amongst this documentation are commented code, architectural decisions, and API notes. Even if your documentation contains bugs, or documents bugs in your software, it is OK. Remember, transparency is key.
Freedom - You must allow your code to be "free" - by this I mean free as in speech, not as in beer. If you have a feeling your market is being a library for other corporations, a BSD license is optimal. If your piece of software is going to run on desktops, then the GPL is your choice.
Transparency - You must write software in a transparent environment. Once you go open source there are no hidden secrets; you must explain everything you do, and what you are doing. This will piss off developers like no other.
Developer community - A strong developer community is required, and it must already exist. Only about 5% of users contribute back to a project. If someone notices there haven't been any releases for a year, they won't think "Wow, this piece of software is done," they will think "the developers must have dumped it." Keep your developers working on it, even if it means they are costing you money.
Communications - You must make sure your community is able to communicate. They must be able to file bugs, discuss workarounds, and publish patches. Without feedback, it is pointless to open-source the project.
Availability - Making your code easy to get is necessary, even if it means pissing off lawyers. You have to make sure your project is easy to download and utilize. You don't want the user to have to jump through 18 nag screens and sign a contract in order to do this. You have to make things simple and clean.
I think that the single most important factor is the number of users that are using your project.
Otherwise it's just a really well-written, useful, and well-documented bunch of stuff that sits on a server not doing very much...
To acquire contributors, you first need users, then you need some incompleteness. You need to trigger the "This is cool, but I really wish it had this or was different in this way." If you are missing an obvious feature, it's extremely likely a user will become a contributor to add it.
The most important thing is that the program be good. If it's not good, nobody will use it. You cannot hope that the chicken-and-egg will reverse and that people will take it for granted until it becomes good.
Of course, "good" merely means "better than any other practical option for a significant number of people"; it doesn't mean that it's strictly the best, only that it has some features that make it, for many people, better than other options. Sometimes the program has no equivalent anywhere else, in which case there's almost no requirement in this regard.
When a program is good, people will use it. Obviously, it has to have a market among users--a good program that does something nobody wants isn't really good, no matter how well it's designed. One could make a point about marketing, but truly good products, up to a point, have a tendency to market themselves. It's much harder to promote something that isn't good, so clearly one's first priority should be the product itself, not promoting the product.
The real question then is: how do you make it good? And the answer to that is a dedicated, skilled development team. One person can rarely create a good product on their own; even if they're far better than the other developers, multiple perspectives have an incredibly useful effect on the project. This is why having corporate sponsors is so useful--it puts other developers' minds (from the corporation) on the problem to give their own opinion. This is especially useful in the case that developing the program requires significant expertise that isn't commonly available in the community.
Of course, I'm saying this all from experience. I'm one of the main developers on x264 (currently the most active one), one of the most popular video encoders. We have two main developers, various minor developers in the community that contribute patches, and corporate sponsorship from Joost (Gabriel Bouvigne, who maintains ratecontrol algorithms), from Avail Media (who I work for sometimes on contract and who are currently hiring coders on contract to add MBAFF interlacing support), and from a few others that pop up from time to time.
One good developer doesn't make a project--many good developers do. And the end result of this is a program that encodes video faster and at a far better quality than most commercial competitors, hardware or software, even those with utterly enormous development budgets.
In looking at these issues you might be interested in checking out the online version of a course on open source at UC Berkeley, called Open Source Development and Distribution of Digital Information: Technical, Economic, Social, and Legal Perspectives. It's co-taught by Mitch Kapor (Lotus founder) and Pamela Samuelson, a law school professor. I have a long commute and put the audio of the course on my iPod last year - they talk a lot about what works, what doesn't and why, from a very broad (though obviously academic) perspective.
Books have been written on the subject. In fact, you can find a free book here: Producing Open Source Software.
Really, I think the answer is 'how you run the project'.
All of your examples matter, yes, but the key things are how the interaction between developers is managed, how patches etc are handled/accepted, who's 'in charge' and how they handle that responsibility, and so on and so forth.
Compare and contrast (the history isn't hard to track down!) the management of the development of Class::DBI and DBIx::Class in Perl.
I was just reading tonight an excellent post on the usability aspect of successful vs unsuccessful open source projects.
Excerpt:
A lot of bandwidth has been wasted arguing over the lack of usability in open-source software/free software (henceforth “OSS”). The debate continues at this moment on blogs, forums, and Slashdot comment threads. Some people say that bad usability is endemic to the entire OSS world, while others say that OSS usability is great but that the real problem is the closed-minded users who expect every program to clone Microsoft. Some people contend that UI problems are temporary growing pains, while others say that the OSS development model systematically produces bad UI. Some people even argue that the GPL indirectly rewards software that’s difficult to use! (For the record, I disagree.)
http://humanized.com/weblog/2007/10/05/make_oss_humane/
Just open-source it. Most probably, nobody will start contributing yet. But at least you can write on the press-releases that your product is GPL or whatever.
The first step is that people start using it...
And maybe then, after users get comfortable, they will start contributing.
Everyone's answers have been good so far, but there's one thing missing and that's good oversight. Nothing kills an open source project faster than not having some sort of project management. Not to tell people what to do so much as to just add some structure and tasking for the developers you are hoping to attract.
Disorganized projects fall apart fast. It's not a bird you just let go and watch it fly away.
I am a student studying software development, and I feel programming, in general, is too broad a subject to try to know everything. To be proficient, you have to decide which areas to focus your learning and understanding on. Certain skill sets synergize with each other, like data-driven web development and SQL experience. However, all the Win32 API experience in the world may not directly apply to Linux development. This leads me to believe that, as a beginning programmer, I should start deciding where I want to specialize once I have a general understanding of the basic principles of software development.
This is a multi-part question really:
What are the common specializations within computer programming and software development?
Which of these specializations have more long-term value, both as a foundation for other specializations and/or as marketable skills?
Which skill sets complement each other?
Are there any areas of specialization that hinder your ability to develop other areas of specialization?
Ben, almost all seasoned programmers are still students of programming. You never stop learning when you are a developer. But if you are really starting off in your career, then you should be least worried about the specialization thing. No API, framework, or skill set you expect to give you a long-term existence in the field is going to do so. Technology changes a lot, and you should be versatile and flexible enough to learn anything. The knowledge you acquire on one platform/API/framework doesn't die off; you can apply the skills to the next great platform/API/framework.
That being said, you should just stop worrying about the future and concentrate on the basics. Data structures, algorithm analysis and design, compiler design, and operating system design are the bare minimum stuff you need. And further, you should be willing to go back and read the books in those fields at any time in your career. That's all that is required. Good luck.
Sorry if I sounded like a big ass advisor; but that's what I think. :-)
Not to directly reject your premise but I actually think being a generalist is a good position in programming. You will certainly develop expertise in specific areas but it is likely to be a product of either personal interest or work necessity. Over time the stuff you are able to transfer across languages and problem domains is at the heart of what makes good programmers.
I think the more important question is: What areas of specialization are you most interested in?
Once you know, begin learning in that area!
I would think the greatest skill of all would be to adapt with the times, because if your employer can see this potential in you then they would be wise to hold on tightly.
That said, I would advise you dive into the area YOU would enjoy. Learning is driven by enthusiasm.
Since my current employer is an internet provider, I've found networking knowledge particularly helpful. But someday I'd like to play with 3D graphics (not necessarily games).
Go as deep as you can starting off in one environment, win32, .net, Java, Objective C... whatever.
It is important to build the deep understanding of how X works... so that you can translate the same concepts into other languages or platforms/environments, if you so desire.
"Are there any areas of specialization that hinder your ability of developing other areas of specialization." Sort of, but nothing permanent i think.
Since I am relatively green myself (less than 4 years) I come from a really OOP mindset. I've rarely jumped out of .NET, so I had a hard time on one job when coming into contact with embedded code. With embedded programmers fearing object creation and the performance loss of inheritance. I had to learn the environment, seriously low memory and slow clock times, they were coming from. Those are times to grow, I had a better time at it because i understood my area pretty well.
I will say if you pick something to specialize in for marketability and money, you will probably burn out fast. If you do start to specialize pick something you enjoy. I love GUI programing and hate server side stuff, my buddy is the opposite, but we both love our jobs. If he had to do my job, and I his, we would both go insane out of boredom.
As a student I'd recommend forgetting about what you're programming and focusing on the software process itself. Understand how to analyse a problem and ask the right questions; learn every design pattern you can and actually apply them all to gain a real understanding and appreciation of object-oriented design; write tests and then code only as much as you need to in order to make the tests pass. I think the best way to really learn is to just code as much as you can - the language and the domain aren't important, browse sourceforge and freshmeat for any interesting-sounding projects and get involved. What's important is understanding the fundamentals of software engineering.
And yes, this includes C. Or Assembler. This is the easiest way to get a good understanding of how your computer works and what your high-level code is actually doing.
Finally, never stop learning - Service-oriented architecture, inversion of control, domain-specific languages, business process management are all showing huge benefits so they're important to be aware of - But by the time you finish studying and join the workforce who knows what the next big thing will be?