So many buzzwords. Not sure if I need to start playing BS Bingo or not. And I'm not trying to be cynical. But I've heard many people with these various titles. There never seems to be a clear delineation between the three, or there's a lot of domain crossover between them. Actually, another one I've seen while looking around here on Stack Overflow is "Solutions Architect", but that one doesn't seem to be so prevalent elsewhere.
There are questions here and there with vague answers, but I'd like definitive answers to this. Please assume I'm still relatively new to software stuff and that I'm trying to map out a career path.
Oh, and please be gentle folks; this most definitely is not a duplicate question. Neither is it an aggregate. So kindly leave it alone. Xp
Like any other such term, these terms are used differently in different places, and are sometimes interchangeable. Here's what the differences typically are:
The Application Architect is what many of us just call the Architect: the person responsible for the highest levels of design and scope for a particular solution/project. You'd only bother using Application in the title if there were other types of architects around and you wanted it clear that this person worries mainly about a particular application.
The Enterprise Architect worries about all of a company's solutions: how they rely on each other, how they use each other, and how efficient their common upkeep and improvement can be made. He thinks about how all the solutions together support the company's mission. Only a larger company could warrant this grandiose title. The Enterprise Architect is a big shot who meets with the CIO, CTO, and other such big shots.
The Systems Architect might be considered to have a wider scope than the Application Architect and a narrower one than the Enterprise Architect. This title is sometimes the exact same thing as Application Architect: big shot on a particular project. Sometimes the System part of the title connotes a wider scope: a person whose duties include software but also hardware and IT, or someone worried about multiple projects.
I've encountered quite a few architects with varying titles over the years, and I definitely have my own view on the role of each of them; however, it does tend to vary from company to company, and even more so from country to country.
Enterprise Architect - responsible for strategic thinking, roadmaps, principles, and governance of the entire enterprise. Usually has a close relationship with the business, vendors, and senior IT management.
Solutions or Systems Architect - responsible for designing a high-level solution to a specific set of business requirements, within the framework laid down by the enterprise architecture team. This solution may span multiple applications.
Integration or SOA Architect - responsible for the integration and/or business service strategy and governance. Since this role usually spans the entire enterprise it is often performed by an EA; however, an integration architect will usually work at a more detailed and technical level than an EA. In a SOA, an integration architect may also be a member (or even the leader) of the centre of excellence (or integration competency centre).
Information Architect - responsible for defining data models, master data management / ownership solutions, and data quality processes to support transactional, reference data, and reporting systems throughout the enterprise. Often performed by an EA.
Application Architect - responsible for the implementation of, and processes within, a specific application or suite of applications. The application in question may be bespoke or a customised off-the-shelf product. Should have deep knowledge of the product/application and will often be consulted by other architects as part of a larger solution.
Those are the most common titles in my experience, but there are other architecture roles that you'll see from time-to-time:
Technical Architect (for me this is the same as an application architect, but others may disagree)
Business Architect (defines business strategy and processes)
Infrastructure Architect (servers, platform decisions, and often security)
Network Architect (obvious)
Well, it sounds like you're still in school but looking into the business world and expecting titles on business cards to be as structured as a degree system... it's not.
Truth be told, a business title is either a company-specific job title to support the internal pay-grade system or the org charts. It's hard to take from one company to another in anything more than very general terms. It's not really a benchmark like a BA or PhD.
As far as titles and job roles go, teams in companies generally break down into Infrastructure (Server setup and networking), and Applications (DB admins, developers) groups. There is a lot of variance by company but that seems to be pretty universal.
I think your best bet in planning is to decide what you like to do, and then study to do it very well. Along the way, as an IT guy, you have to pick up a little in all areas to be very effective in any area anyway.
Good luck!
What each means really is determined by the company you work for; what some call a Solution Architect others call a System Architect. They're all (assumed) senior positions with some implied oversight of system/software design but aside from that I've never seen one definition that will make everyone happy. "Enterprise" does imply some scope to the position, but it's arguably no different than "Systems" (again, depends on the company). Application Architect usually makes me think of someone who's been around as a senior developer for a while and HR needed a new title to reward performance. Sort of like the way some massive companies lob the "Vice President" title around to the point it doesn't really mean anything besides how many vacation days you get.
What the Hell..
Architect:
Application - designs applications
Systems - designs multiple applications and coordinates into a system
Enterprise - designs systems and coordinates with the other non system aspects of the business enterprise.
Let's not forget salary, bonus and how much time you spend on worrying about your title.
The Enterprise Architect is the biggest role of all. He has to have complete knowledge of the technological implementation.
Based on my own experience and on the experience of my friends, I see that many companies have the strange idea of developing their own frameworks and SW factories (which build the skeleton of an application for you). These ideas are usually based on the belief that their own framework will be much better than anything else available. How do you deal with such ideas, and how do you explain that it is not always a good way to go?
Why I think internal frameworks / factories are not good:
Budget & resources - There is usually only an initial budget to create the framework. Nobody thinks about the budget needed to maintain and support the framework. Nobody can even estimate the budget and resources needed for maintenance. At the beginning, nobody thinks about maintaining multiple versions of the framework to support already existing applications.
Lack of experience - The framework is usually created by people without any such experience, or with the support of "consultants" - generally much more expensive people with a similar skill set.
Architecture / design - Any architectural problem in the framework affects all applications built using it. Bad design decisions in the framework force developers to code applications in a bad way.
Technical debt - Bad code in the framework is technical debt.
False belief in a silver bullet - Managers believe that their own framework / factory is a silver bullet: all applications will be written in the same way and will be easily maintainable. My experience is that this is simply not true. Even with a SW factory, each application is specific.
Insufficient documentation - Documentation is the first thing affected by a low budget. A framework without documentation is useless. Reflector (.NET) is my best friend.
Insufficient user group - An internal framework has only a small user group. A small user group means little shared experience. If I'm using a public tool / framework and I have a problem, I can ask a question on SO (or a similar site) or just try to find the answer on Google. With an internal framework this is not possible.
Policy - Company policy forces you to use the framework to justify the framework's costs. This goes so far that the framework is chosen before the first requirement is gathered.
Complains to framework are prohibited.
Usage of other frameworks is prohibited.
Why I think companies are doing this:
Arrogance & egoism - Somebody in the company believes that he can do it better.
Ignorance - Ignoring existing frameworks / solutions and the fact that only good frameworks survive long enough to be popular. Ignoring the user group and the information already available on the Internet.
Management failure and incompetence - Not understanding the impact (especially long term) of this decision. Decisions based on incorrect information. Management without a SW development background.
I understand that sometimes a custom solution or framework is needed for a specialized scenario, but I'm tired of all these "great internal frameworks" for creating web or desktop applications. Am I wrong? Are these frameworks really needed (in the .NET and Java worlds)? Can you provide some examples or reasons why it is good to have an internal framework / factory?
Edit:
Thanks for the answers, but I expected some advice on how to deal with this problem as a developer (other than changing jobs), not as a manager.
In my experience the most common cause of excess frameworks is... bored developers! Uninspired developers find that developing frameworks to solve their problems is much more fun than actually solving those problems - the end result is frameworks that suffer from all of the above (because of course the developer only did the fun bits), and possibly don't even solve the actual problem (because the goal was to have fun, not to solve the problem).
The solution is tricky - it's difficult to know what it is that motivates developers, as everyone is motivated by different things. However, motivated developers who are busy doing things that they enjoy don't seem to suffer from this ailment!
That said, well-thought-out frameworks, when properly used, are definitely a good thing. However, if it's only going to be used internally, then it might be better to think of it as an extension of refactoring and code reuse rather than as a framework.
A classic sign that someone is suffering from bored developer framework syndrome is when a framework is being developed to solve the general case when there isn't yet a solution for the specific case:
How can you know how to solve the general case until you have solved at least one specific case?
Until you've had to solve the specific case a few times, you are only guessing that a framework is even needed.
The second case of course leads to the worst sort of framework - the mammoth generic framework that is only ever used once, to solve a problem which is far simpler than the framework itself.
Instead, consider these sorts of frameworks more as an exercise in "extensive refactoring": if the framework is produced as a way of grouping and tidying common code on an as-needed basis, then it will grow in size and complexity organically. Having already solved the problem before you start producing frameworks is nice too, as it means you are already experts in whatever it is the framework needs to do.
Finally, try to keep developers from being bored (they get up to all sorts of mischief otherwise!)
Generalization is bad, but here is what I've noticed, especially in large enterprise projects:
Such behavior is generally driven by one or a few software engineers (they usually call themselves software architects) who fall into the descriptions you wrote in your question. Everybody needs to be proud of something to have the courage to wake up in the morning. I will add that they are usually CV-driven for that reason and try to apply the latest design patterns without thinking about business implications and ROI. The key is NOT to hire that kind of person (or to try to coach him/her to understand the business consequences of his/her choices). Trying to make them proud of working for the company instead of working on their framework may help too.
Missed deadlines, bugs, and high turnover tend to be addressed by applying fancy methodologies (usually badly implemented) like Scrum, or by hiring highly priced consultants who will make things worse... instead of removing (or coaching) the people who shouldn't have been hired in the first place.
Removing the person in question is in most cases a bad thing, since he OWNS the thing. So teaching him/her to understand the consequences of his/her choices is probably the most appropriate way to solve the problem. But to do that, you need a good manager.
So my only advice would be:
Hire better managers who understand both business and software development very well. They won't hire that kind of person, or they will try to teach them how to consider the business in addition to pure software development. They also understand that the most powerful motivational fuel for employees is making them proud of working for the company.
I would add some other reasons these things evolve, and I have seen this in more than one place:
- Developer lock-in. Once you have the devs coding in a non-transferable skill set, it's harder for them to leave.
- Author lock-in. Once you have several apps under maintenance dependent on the framework, you have the organisation dependent on the administrative group.
- Political control. The centralisation allows the framework to become a channel for political control.
How do you successfully implement a democratic (non-BDFL-controlled) style of management for an open source project?
More specifically, for a project using distributed source repositories.
What style of communication is best to adopt in such an environment?
How to encourage merging branches into the master?
I am mostly interested in establishing a situation where people can directly merge into the master branch under a "social contract" agreement that they follow the project roadmap (which they themselves help to define) and that the code they commit is tested.
I specifically want to encourage this workflow:
define the problem -> define requirements and specific metrics of success -> architect -> build and test
The reason is that I often see emails like "here is the problem, and here is how I think it should be solved".
Immediately somebody else jumps in and starts a fight.
Not productive at all.
Often disagreement of that kind stems from not being on the same page on the problem definition, requirements or architecture. Or sometimes just because nobody even thought about such things.
How to encourage people to analyze the problem properly, share great ideas and select the best solutions?
How to organize communication so to avoid silly fights, make good decisions without being overly bureaucratic and move along at a good pace ?
Would you have any suggestions? Are there examples of projects managed this way?
How do you think adoption of distributed revision control instead of centralized affects the style of project management?
Edit: found some interesting links in related questions:
http://gettingreal.37signals.com/toc.php
http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/
Sorry for this somewhat off-topic answer (i.e. one not addressing directly the question). Please edit at your heart's delight!
Projects _do_ need executive governance!
Such may come from a single Benevolent (or not) Dictator, or a [IMHO, small] group, possibly open, of diverse but like-minded individuals. A standard joke on this is that: "The group should be made of an odd number of members, and 3 is already too many"; in truth, small collegial committees can be very effective.
This requirement for a "non fully democratic" decision making entity is however somewhat aligned with the processes suggested in the question. To effectively harness the good will of contributors to the project, the executive team needs to
be perceived as legitimate
communicate effectively
empower "the masses" to contribute with roadmap definition, problem identification, solution scoping and all other design-level tasks. (All the while it remaining clear that in the end, and after all has been said, final decision will be done in committee.
DELIVER a good and vibrant product (BTW by adopting agile development processes, the time between deliveries is lessened)
compromise when needed
advocate the interest of pooling resources in a coordinated fashion, rather than branching out.
Share the glory!
To support all of this, formal documentation and processes are very useful. For example, the define the problem->define requirements and specific metrics of success->architect->build procedure indicated in the question can be implemented in the form of a single collaboratively edited document (wiki-based or other), i.e. one per issue/idea. This document is templated with a defined format: required attributes such as date, initial posting info... and sections which are opened (and closed) for editing following a given schedule. This allows keeping a clean(er) record of the way the collective thought about a given issue, what was suggested, what was [authoritatively] decided, and why.
With such a process, the community feels engaged and hopefully individuals not too disappointed when the eventual decisions do not go "their" way. They can read and comment on the what and why of the decisions.
Another useful approach is to reward effective participation by informally (or formally) giving more weight to the contributors who effectively help with the project. The more active members can either earn their way into the "inner circle" or be handed leadership roles on subsets of the project.
Finally, the leaders of a project (whether in the context of a "democratic" or of a "partially dictatorial" leadership) need to be ever vigilant about "peace keeping". Contributors to Open Source projects are typically energetic, smart, and opinionated individuals; conflicts of opinion, conflicts of personality, impatience, etc. are to be expected. These clashes can however be eased by systematically addressing issues with clear facts, and by moderating/editing aggressively against name-calling and other non-productive forms.
Originally posted on MetaOptimize.
Constitution for Governance of Open-Source Projects (v20100227)
Let it be affirmed that the primary goal in instituting governance of an open-source project be to ensure the long-term health of the project.
Accordingly, the default bias should be towards openness and inclusiveness.
However, policy should be changed as issues present themselves, in order to maintain the long-term health of the project.
For the model of decision making, we favor a "do-ocracy".
The people who contribute the most generally command the respect of the community.
Alienating them is the best way to derail the project.
The repository should be open to committers, given that commits can easily be reverted and commit access easily revoked. This is preferable to alienating potential committers.
To ensure transparency for developers new and old, and to allow them to decide their involvement in a project based upon its history, there should be transparency and openness in the inner workings of the project. For example, the email archive should be public.
Lastly, let us remember that too much red-tape gets in the way of progress. So red-tape and other barriers to contribution should be avoided, and only added as issues present themselves.
This Constitution can and should be amended as issues present themselves.
Therefore be it resolved.
Access to the repository:
there are several options, ranging from one controlling committer to anyone being able to commit - upon the social agreement that they respect the project guidelines and the roadmap.
in my opinion a distributed repository allows you to be more relaxed about who you allow to commit, because there are multiple backups and the repo becomes virtually unbreakable.
on a separate note, the anyone-can-commit approach - which I would like to test out - sounds more like a "wiki". I can directly compare this with my experience managing a wiki (nmrwiki.org): despite complete openness - where not even user registration is required to edit the resource - people are often wary about "breaking stuff", and this worry becomes a mental barrier to making a contribution. So a permissive approach to repository management may just work.
Communication style:
email and mailing lists:
discuss everything concerning the project in public (?)
when asking questions - do not bundle questions with your own opinions
encourage people to propose more than one solution
ask people to weigh pros and cons
be brief and to the point
avoid frivolous language that can be perceived negatively by people who do not know you well
Let's say your company has given you the time & money to acquire training on as many advanced programming topics as you can eat in a year, carte blanche. What would those topics be, and how would you prefer to acquire them?
Assumptions:
You still have deliverables to bring into existence, but you're allowed one week per month for the year for this training.
The training can come from anywhere. IE: Classroom, on-site instructor, books, subscriptions, podcasts, etc.
Subject matter can cover any platform, technology, language, DBMS, toolset, etc.
Concurrent/parallel programming and multi-threading, especially with respect to memory models and memory coherency. I think every programmer should be aware of the considerations in this arena as we move into a world of multi-core/multi-CPU hardware.
For this I would probably use Internet research most heavily, but an on-campus primer at a good university could be a good way to start off.
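To make the memory-coherency point concrete, here is a minimal Java sketch (Java and all names chosen purely for illustration) of the classic visibility hazard this kind of training covers: without volatile, one thread's write to a flag may never be observed by another.

```java
// A minimal sketch of a memory-visibility hazard (hypothetical example).
public class VisibilityDemo {
    // Without 'volatile', the worker thread may cache 'running' and spin
    // forever on some JVM/hardware combinations; 'volatile' forces every
    // read to observe the latest write.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long spins = 0;
            while (running) {
                spins++;                  // busy-wait on the flag
            }
            System.out.println("stopped after " + spins + " spins");
        });
        worker.start();
        Thread.sleep(100);                // let the worker run briefly
        running = false;                  // publish the stop signal
        worker.join();
    }
}
```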
Security!
Far too many programmers just build something and think they can add security as an afterthought after finishing the "main" part of the program. You could always benefit from knowing more about how to secure your app, how to design software to be secure from the get-go, how to do intrusion detection, etc.
Advanced Database Development
Things like data warehousing (MDX, OLAP queries, star schemas, fact tables, etc), advanced performance tuning, advanced schema and query patterns, and the like are always useful.
Here are the three that I'm always finding myself explaining to junior developers who didn't get enough CS training. All that other stuff is generally more hype than substance, or can be fairly easily picked up. But if you don't know these three, you can do a great deal of damage:
Algorithm analysis, including Big O notation.
The various levels of cohesion and coupling.
Amdahl's Law, and how it pertains to optimizations.
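As a quick worked illustration of that last item (the class and numbers below are invented for the example): Amdahl's Law says overall speedup = 1 / ((1 - p) + p / s), where p is the parallelizable fraction of the work and s is the speedup of that fraction, so the serial remainder caps what optimization can buy you.

```java
// Amdahl's Law: overall speedup = 1 / ((1 - p) + p / s).
// Illustrative numbers only.
public class AmdahlDemo {
    static double speedup(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    public static void main(String[] args) {
        // 90% parallel across 8 cores: about 4.7x, not 8x.
        System.out.printf("p=0.90, s=8   -> %.2fx%n", speedup(0.90, 8));
        // Even with effectively unlimited cores, the serial 10% caps it at 10x.
        System.out.printf("p=0.90, s=1e9 -> %.2fx%n", speedup(0.90, 1e9));
    }
}
```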
Internationalization issues, especially since it sounds like it would not be an advanced topic. But it is.
Accessibility
It's ignored by so many organizations but the simple fact of the matter is that there are a huge number of people with low or no vision, color blindness, or other differences that can make navigating the web a very frustrating experience. If everybody had at least a little bit of training in it, we might get some web based UIs that are a little more inclusive.
Object oriented design patterns.
I guess "advanced" is different for everyone, but I'd suggest the following as being things that most decent developers (i.e. ones that don't need to be told about NP-completeness or design patterns) could gain from:
Multithreading techniques that go beyond "lock" and when to apply them (see the sketch after this list).
In-depth training to learn and habitualize themselves with clever features in their toolchain (IDE/text editor, debugger, profiler, shell).
Some cryptography theory and hands-on experience with different common flaws in security schemes that people create.
If they program against a database, learn the internals of their database and advanced query composition and tuning techniques.
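As one hedged example of "beyond lock", here is a Java sketch using java.util.concurrent primitives; the EventCounters class and its method names are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical example: many threads record named events without any
// synchronized block. computeIfAbsent gives an atomic get-or-create;
// LongAdder spreads contended increments across internal cells.
public class EventCounters {
    private final ConcurrentHashMap<String, LongAdder> counts =
            new ConcurrentHashMap<>();

    public void record(String event) {
        counts.computeIfAbsent(event, k -> new LongAdder()).increment();
    }

    public long count(String event) {
        LongAdder adder = counts.get(event);
        return adder == null ? 0 : adder.sum();
    }
}
```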
Developers should know the basics of SQL development and how their decisions impact database performance. It is one thing to write a query; it is another thing to write a query, understand the explain plan, and make design decisions based on that output. I think a good course on PL/SQL development and database performance would be very beneficial.
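As a hedged sketch of reading an explain plan programmatically, the snippet below assumes an Oracle database with the classic scott/emp demo schema and an Oracle JDBC driver on the classpath; the connection URL and credentials are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch only: connection details are placeholders, and emp/deptno come
// from the classic Oracle demo schema.
public class ExplainPlanDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             Statement stmt = conn.createStatement()) {
            // Ask the optimizer to record its plan for the query.
            stmt.execute("EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10");
            // DBMS_XPLAN.DISPLAY formats the most recently explained plan.
            try (ResultSet rs = stmt.executeQuery(
                         "SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY)")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```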
Unfortunately communication skills seem to fall under the "advanced topics" section for most developers (present company excluded, of course).
Best way to acquire this skill: practice.
Take off the headphones, and talk to someone instead of IM'ing or emailing the guy at the next desk.
Pick up the phone and talk to a client instead of lobbing an email over the fence.
Ask questions at a conference instead of sitting behind your laptop screen twittering.
Actively participate in a non-technical meeting at work.
Present something in public.
Most projects do not fail because of technical reasons. They fail because they could not create a team. Communication is vital to team dynamics.
It will not harm your career either.
One of the best courses I took was a technical writing course. It has served me well in my career.
Additionally: it probably does not matter WHAT the topic is - the fact that the organization is interested in it, is paying for it, and the developers want to go (and do go) is a better indicator of success/improvement than any one particular topic.
I also don't think it matters that much what the topic is. Dev organizations deal with so many things during a project that training and then on the job implementation/trial and error will always get you some better perspective - even if the attempts to try out/use the new stuff fail. That experience will probably help more on the subsequent projects.
I'm a book person, so I wouldn't really bother with instruction.
Not necessarily in this order, and depending on what you know already
OO Programming
Functional Programming
Data structures and algorithms
Parallel processing
Set based logic (essentially the theory behind sql and how to apply it)
Building parsers (I only put this, because it actually came up where I work)
Software development methodologies
NP Completeness. Specifically, how to detect if a problem is NP-Complete, and how to build an approximate solution to the problem.
I see this as important because you don't want a developer to try and solve an NP-complete problem by getting the optimum solution, unless the problem's search space is very small, in which case brute force is acceptable. However, as the search space increases, the time required to solve the problem increases exponentially.
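To illustrate the "approximate solution" point, here is a hedged Java sketch of the textbook 2-approximation for minimum vertex cover (NP-complete in its decision form); the toy graph is invented for the example.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Classic 2-approximation for minimum vertex cover: repeatedly pick an
// uncovered edge and take BOTH endpoints. The result is never more than
// twice the size of an optimal cover.
public class VertexCoverApprox {
    static Set<Integer> approximateCover(List<int[]> edges) {
        Set<Integer> cover = new HashSet<>();
        for (int[] edge : edges) {
            if (!cover.contains(edge[0]) && !cover.contains(edge[1])) {
                cover.add(edge[0]);   // taking both endpoints is exactly
                cover.add(edge[1]);   // what yields the factor-2 guarantee
            }
        }
        return cover;
    }

    public static void main(String[] args) {
        List<int[]> edges = Arrays.asList(
                new int[]{1, 2}, new int[]{2, 3}, new int[]{3, 4});
        System.out.println(approximateCover(edges)); // e.g. [1, 2, 3, 4]
    }
}
```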
I'd cover new technologies and trends. Some of the new technologies I'm researching/enhancing my skills with include:
Microsoft .NET Framework v3.0/v3.5/v4.0
Cloud Computing Frameworks (Amazon EC2, Windows Azure Services, GoGrid, etc.)
Design Patterns
I am from the MS-based developer world, so here is my take on this:
More about new concepts in cloud computing (various APIs, etc.), as the industry has been betting on it for some time.
More about LINQ for the .NET Framework
Distributed databases
Refactoring techniques (which also implies learning to write a good set of unit/functional tests).
Knowing how to refactor is the best way to keep code clean -- it is rare to get it right the first time (especially in new designs).
A number of refactorings, however, require a decent set of tests to check that the refactoring did not add unexpected behavior.
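As a hedged illustration of that "decent set of tests", here is a minimal JUnit 5 sketch; the PriceCalculator class and its discount rule are invented for the example.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class about to be refactored.
class PriceCalculator {
    double total(int quantity, double unitPrice) {
        double total = quantity * unitPrice;
        return quantity >= 100 ? total * 0.9 : total; // 10% bulk discount
    }
}

// These tests pin the current behavior; rewrite the internals of total()
// however you like, and a green bar says the observable rules still hold.
class PriceCalculatorTest {
    @Test
    void bulkOrdersGetTenPercentDiscount() {
        assertEquals(450.00, new PriceCalculator().total(100, 5.00), 0.001);
    }

    @Test
    void smallOrdersPayFullPrice() {
        assertEquals(25.00, new PriceCalculator().total(5, 5.00), 0.001);
    }
}
```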
Parallel computing, and the easiest and best way to learn it.
Debugging
Debugging by David J. Agans is a good book on the topic. Debugging can be very complex when you deal with multithreaded programs, crashes, algorithms that don't work, etc. Everybody would be better off being good at debugging.
I'd vote for real-world battle stories. Have developers from other organizations present their successes and failures. Don't limit the presentations to technologies you're using. With a significantly complex project, this is bound to cut into 'advanced' topics you haven't even considered. Real-world successes (and failures) have a lot to teach.
Go to the Stack Overflow DevDays and the ACCU conferences.
Read
Agile Software Development, Principles, Patterns, and Practices (Robert C. Martin)
Clean Code (Robert C. Martin)
The Pragmatic Programmer (Andrew Hunt & David Thomas)
Well if you're here I would hope by now you have the basics down:
OOP Best practices
Design patterns
Application Security
Database Security/Queries/Schemas
Most notably developers should strive to learn multiple programming languages and disciplines, in order for their skill set to be expanded in more than one direction. They don't need to become experts in these other skills but at least have a very acute understanding of integration with their central discipline. This will make them much better developers in the long run, and also let them gain the ability to use all tools at their disposal to create applications that can transcend the limitations of a singular language.
Outside of programming specific topics, you should also learn how to work under Agile, XP, or other team based methodologies in order to be more successful while working in a team environment.
I think an advanced programmer should know how to get your employer to give you the time & money to acquire training on as many advanced programming topics that you can eat in a year. I'm not advanced yet. :)
I'd suggest an Artificial Intelligence class at a college/university. Most of the stuff is fun, easy to grasp (the basics at least), and the solutions to problems are usually creative.
The Hitchhiker's Guide to the Galaxy.
How would I prefer to acquire the training? I'd love to have a substantial amount of company time dedicated to self-training.
I totally agree on accessibility. I was asked to look into it for the website at work, and there is a real lack of good knowledge on the subject, not to mention a lack of CSS standards to aid the likes of screen readers.
However, my answer goes to GUI design - it's quite a difficult thing to get right. There are too many awful applications out there that could be prevented just by taking the time to follow HCI (Human-Computer Interaction) advice/designs. Take Google/Apple for inspiration when making a GUI - not your typical hundreds-of-buttons/labels combo that too often gets pushed out.
Automated testing: Unit testing, functional integration testing, non-functional testing
Compiler details (more relevant on some platforms than others): How does the compiler implement certain common constructs in language X? On a byte-code interpreted platform, how does JIT compilation work? What can be JIT-compiled (for example, can virtual calls be JIT compiled?)?
Basic web security
Common design idioms from other problem domains than the one you're working in at the moment.
I'd recommend learning about Refactoring, Test Driven Development, and various unit testing frameworks (NUnit, Visual Test, CppUnit, etc.) I'd also learn how to incorporate automated unit testing into your continuous integration builds.
Ultimately if you can prove your code does what it claims it can do, you don't have to be there to answer questions as to why or how. If a maintainer comes along and tries to "fix" your code, they'll know instantly if they broke it. Tests written around the requirements (use cases) explain to the maintainer what your users wanted it to do, and provide a little working example of how to call it. Think of unit tests as functional documentation.
Test Driven Development (TDD) is a more novel design approach that begins with the requirements, where you start by writing a test before you write the code. You then write exactly enough code required to pass the test. You have to stop before you write extra code (that you may never need), because you will refactor it later if you find that you really needed it.
What makes TDD cool is that a bad interface (such as one with lots of dependencies) is also very hard to write tests for. It's so hard that a coder would rather refactor the interface to make it easier to test. And that refactoring simplifies the code, removing inappropriate dependencies, or grouping related tests together to make it easier to test, thus improving cohesion. By making it immediately apparent to the developer when he's writing a badly interfaced module, the developer sticks to the architecture and gravitates to the principles of tight cohesion and loose coupling. Good interfaces are the natural result. And as a bonus, once you pass all your tests, you know you're done.
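To make the red-green rhythm concrete, here is a minimal hedged sketch (JUnit 5 assumed; the leap-year example and all names are invented): the test is written first and fails until the minimal implementation below it is added.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Red: this test is written first and fails to compile until LeapYears
// exists; then it fails until the rule is implemented.
class LeapYearsTest {
    @Test
    void centuryYearsAreNotLeapUnlessDivisibleBy400() {
        assertFalse(LeapYears.isLeap(1900));
        assertTrue(LeapYears.isLeap(2000));
        assertTrue(LeapYears.isLeap(2024));
    }
}

// Green: exactly enough code to pass the test above - no extra features.
class LeapYears {
    static boolean isLeap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }
}
```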
On the surface this seems like an easy question to answer, just enter your favorite pet peeve about what other developers can't do correctly. But when I read through the answers and gave it some thought, I realized that every "advanced topic" brought up was covered in my undergraduate computer science curriculum--20 years ago. And I doubt that OO, security, functional programming, etc. concepts have changed in that time. Sure the tools have, but I argue that tools are different than topics.
So what is an "advanced topic" in computer science? Who is the Turing, Knuth, Yourdon of the 21st century?
I don't have a clear answer to this question, though I'd like to see more work on theories for parallel programming that will enable tools to abstract that messy stuff for developers.
Quite funny that no one has mentioned:
debugging.
the tools & IDE you work with
and the platform you are developing for.
Everyday development is much more fun if you know your tools really well, and you accomplish more and make your life easier if you know how to debug someone else's code with ease.
Source Control
We have a series of closed source applications and libraries, for which we think it would make sense opening up the source code.
What has been blocking us, so far, is the effort needed to clean up the code base and documenting the source before opening up.
We want to open up the source only if we have a reasonable chance of the projects being successful -- i.e. having contributors. We are convinced that the code would be interesting to a large base of developers.
Which factors, excluding the "interestingness" and "usefulness" of the project, determine the success of an open source project?
Examples:
Cleanliness of code
Availability of source code comments
Fully or partially documented APIs
Choice of license (GPL vs. LGPL vs. BSD, etc...)
Choice of public repository
Investment in a public website
There are several things which determine the success of code. All of these must be achieved for the slightest chance of adoption.
Market - There must be a market for your open source project. If your project is an orange juicer in space, I doubt that you'll be very successful. You must make sure your project gets large adoption amongst users and developers. It is twice as likely to succeed if you can get other corporations to adopt it as well.
Documentation - As you touched on earlier, documentation is key. Amongst this documentation are commented code, architectural decisions, and API notes. Even if your documentation contains errors, or documents bugs in your software, it is OK. Remember, transparency is key.
Freedom - You must allow your code to be "free" - by this I mean free as in speech, not as in beer. If you feel your market is as a library for other corporations, a BSD license is optimal. If your piece of software is going to run on desktops, then GPL is your choice.
Transparency - You must write software in a transparent environment. Once you go open source there are no hidden secrets. You must explain everything you do, and what you are doing - nothing will piss off developers like discovering something was hidden.
Developer community - A strong developer community is required, and it must already exist. Only about 5% of users contribute back to the project. If someone notices there haven't been any releases for a year, they won't think "Wow, this piece of software is done," they will think "the developers must have dumped it." Keep your developers working on it, even if it means they are costing you money.
Communications - You must make sure your community is able to communicate. They must be able to file bugs, discuss workarounds, and publish patches. Without feedback, it is pointless to open-source the project.
Availability - Making your code easy to get is necessary, even if it means pissing off lawyers. You have to make sure your project is easy to download and use. You don't want the user to have to jump through 18 nag screens and sign a contract in order to do this. You have to make things simple and clean.
I think that the single most important factor is the number of users that are using your project.
Otherwise it's just a really well-written, useful, and well-documented bunch of stuff that sits on a server not doing very much...
To acquire contributors, you first need users, then you need some incompleteness. You need to trigger the "This is cool, but I really wish it had this or was different in this way." If you are missing an obvious feature, it's extremely likely a user will become a contributor to add it.
The most important thing is that the program be good. If it's not good, nobody will use it. You cannot hope that the chicken-and-egg problem will reverse itself and that people will use it anyway until it becomes good.
Of course, "good" merely means "better than any other practical option for a significant number of people," it doesn't mean that its strictly the best, only that it has some features that make it, for many people, better than other options. Sometimes the program has no equivalent anywhere else, in which case there's almost no requirement in this regard.
When a program is good, people will use it. Obviously, it has to have a market among users - a good program that does something nobody wants isn't really good, no matter how well it's designed. One could make a point about marketing, but truly good products, up to a point, have a tendency to market themselves. It's much harder to promote something that isn't good, so clearly one's first priority should be the product itself, not promoting the product.
The real question then is: how do you make it good? And the answer to that is a dedicated, skilled development team. One person can rarely create a good product on their own; even if they're far better than the other developers, multiple perspectives have an incredibly useful effect on the project. This is why having corporate sponsors is so useful - it puts other developers' (from the corporation) minds on the problem to give their own opinion. This is especially useful in the case that developing the program requires significant expertise that isn't commonly available in the community.
Of course, I'm saying this all from experience. I'm one of the main developers on x264 (currently the most active one), one of the most popular video encoders. We have two main developers, various minor developers in the community that contribute patches, and corporate sponsorship from Joost (Gabriel Bouvigne, who maintains ratecontrol algorithms), from Avail Media (who I work for sometimes on contract and who are currently hiring coders on contract to add MBAFF interlacing support), and from a few others that pop up from time to time.
One good developer doesn't make a project--many good developers do. And the end result of this is a program that encodes video faster and at a far better quality than most commercial competitors, hardware or software, even those with utterly enormous development budgets.
In looking at these issues you might be interested in checking out the online version of a course on open source at UC Berkeley, called Open Source Development and Distribution of Digital Information: Technical, Economic, Social, and Legal Perspectives. It's co-taught by Mitch Kapor (Lotus founder) and Pamela Samuelson, a law school professor. I have a long commute and put the audio of the course on my iPod last year - they talk a lot about what works, what doesn't, and why, from a very broad (though obviously academic) perspective.
Books have been written on the subject. In fact, you can find a free book here: Producing Open Source Software.
Really, I think the answer is 'how you run the project'.
All of your examples matter, yes, but the key things are how the interaction between developers is managed, how patches etc are handled/accepted, who's 'in charge' and how they handle that responsibility, and so on and so forth.
Compare and contrast (the history isn't hard to track down!) the management of the development of Class::DBI and DBIx::Class in Perl.
I was just reading tonight an excellent post on the usability aspect of successful vs unsuccessful open source projects.
Excerpt:
A lot of bandwidth has been wasted arguing over the lack of usability in open-source software/free software (henceforth “OSS”). The debate continues at this moment on blogs, forums, and Slashdot comment threads. Some people say that bad usability is endemic to the entire OSS world, while others say that OSS usability is great but that the real problem is the closed-minded users who expect every program to clone Microsoft. Some people contend that UI problems are temporary growing pains, while others say that the OSS development model systematically produces bad UI. Some people even argue that the GPL indirectly rewards software that’s difficult to use! (For the record, I disagree.)
http://humanized.com/weblog/2007/10/05/make_oss_humane/
Just open-source it. Most probably, nobody will start contributing yet. But at least you can write in the press releases that your product is GPL or whatever.
The first step is that people start using it...
And maybe then, after users get comfortable, they will start contributing.
Everyone's answers have been good so far, but there's one thing missing and that's good oversight. Nothing kills an open source project faster than not having some sort of project management. Not to tell people what to do so much as to just add some structure and tasking for the developers you are hoping to attract.
Disorganized projects fall apart fast. It's not a bird you just let go and watch it fly away.
I am a student studying software development, and I feel programming, in general, is too broad a subject to try to know everything. To be proficient, you have to decide which areas to focus your learning and understanding on. Certain skill sets synergize with each other, like data-driven web development and SQL experience. However, all the Win32 API experience in the world may not directly apply to Linux development. This leads me to believe that, as a beginning programmer, I should start deciding where I want to specialize after I have a general understanding of the basic principles of software development.
This is a multi-part question really:
What are the common specializations within computer programming and software development?
Which of these specializations have more long-term value, both as a foundation for other specializations and/or as marketable skills?
Which skill sets complement each other?
Are there any areas of specialization that hinder your ability to develop other areas of specialization?
Ben, almost all seasoned programmers are still students of programming. You never stop learning when you are a developer. But if you are really just starting your career, then you should be least worried about specialization. Don't expect any particular API, framework, or skill set to give you long-term existence in the field. Technology changes a lot, and you should be versatile and flexible enough to learn anything. The knowledge you acquire on one platform/API/framework doesn't die off; you can apply those skills to the next great platform/API/framework.
That being said, you should just stop worrying about the future and concentrate on the basics. Data structures, algorithm analysis and design, compiler design, and operating system design are the bare minimum. Further, you should be willing to go back and read the books in those fields at any time in your career. That's all that is required. Good luck.
Sorry if I sounded like a big-ass advisor, but that's what I think. :-)
Not to directly reject your premise but I actually think being a generalist is a good position in programming. You will certainly develop expertise in specific areas but it is likely to be a product of either personal interest or work necessity. Over time the stuff you are able to transfer across languages and problem domains is at the heart of what makes good programmers.
I think the more important question is: What areas of specialization are you most interested in?
Once you know, begin learning in that area!
I would think the greatest skill of all would be to adapt with the times, because if your employer can see this potential in you then they would be wise to hold on tightly.
That said, I would advise you dive into the area YOU would enjoy. Learning is driven by enthusiasm.
Since my current employ is with an internet provider, I've found networking knowledge particularly helpful. But someday I'd like to play with 3D graphics (not necessarily games).
Go as deep as you can starting off in one environment, win32, .net, Java, Objective C... whatever.
It is important to build the deep understanding of how X works... so that you can translate the same concepts into other languages or platforms/environments, if you so desire.
"Are there any areas of specialization that hinder your ability of developing other areas of specialization." Sort of, but nothing permanent i think.
Since I am relatively green myself (less than 4 years), I come from a really OOP mindset. I've rarely jumped out of .NET, so I had a hard time on one job when I came into contact with embedded code, with embedded programmers fearing object creation and the performance loss of inheritance. I had to learn the environment they were coming from: seriously low memory and slow clock speeds. Those are times to grow; I had a better time of it because I understood my own area pretty well.
I will say, if you pick something to specialize in for marketability and money, you will probably burn out fast. If you do start to specialize, pick something you enjoy. I love GUI programming and hate server-side stuff; my buddy is the opposite, but we both love our jobs. If he had to do my job, and I his, we would both go insane out of boredom.
As a student I'd recommend forgetting about what you're programming and focusing on the software process itself. Understand how to analyse a problem and ask the right questions; learn every design pattern you can and actually apply them all to gain a real understanding and appreciation of object-oriented design; write tests and then code only as much as you need to in order to make the tests pass. I think the best way to really learn is to just code as much as you can - the language and the domain aren't important, browse sourceforge and freshmeat for any interesting-sounding projects and get involved. What's important is understanding the fundamentals of software engineering.
And yes, this includes C. Or Assembler. This is the easiest way to get a good understanding of how your computer works and what your high-level code is actually doing.
Finally, never stop learning. Service-oriented architecture, inversion of control, domain-specific languages, and business process management are all showing huge benefits, so they're important to be aware of. But by the time you finish studying and join the workforce, who knows what the next big thing will be?