How can I support the support department better? - language-agnostic

With the best will in the world, whatever software you (and I) write will have some kind of defect in it.
What can I do, as a developer, to make things easier for the support department (first line, through to third line, and development) to diagnose, work around, and fix problems that the user encounters?
Notes
I'm expecting answers which are predominantly technical in nature, but I expect other answers to exist.
"Don't release bugs in your software" is a good answer, but I know that already.

Log as much detail about the environment in which you're executing as possible (probably on startup).
Give exceptions meaningful names and messages. They may only appear in a stack trace, but that's still incredibly helpful. (A sketch follows this list.)
Allocate some time to writing tools for the support team. They will almost certainly have needs beyond either your users or the developers.
Sit with the support team for half a day to see what kind of thing they're having to do. Watch any repetitive tasks - they may not even consciously notice the repetition any more.
Meet up with the support team regularly - make sure they never resent you.
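To make the point about meaningful exception names concrete, here is a minimal sketch in Python, with entirely hypothetical names: the class name alone tells support which subsystem failed, and the message says what happened and what to check.

```python
# Sketch only: every name here is hypothetical.
class PaymentGatewayTimeout(Exception):
    """Raised when the payment gateway does not respond in time."""

    def __init__(self, gateway_url: str, timeout_seconds: float):
        super().__init__(
            f"Payment gateway at {gateway_url} did not respond within "
            f"{timeout_seconds}s; the order was NOT charged. "
            f"Check gateway status before retrying."
        )

# In a stack trace, 'PaymentGatewayTimeout' tells first-line support far
# more than a bare 'RuntimeError' would.
```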

If you have at least a part of your application running on your server, make sure you monitor logs for errors.
When we first implemented a daily script that greps for ERROR/Exception/FATAL and emails the results, I was surprised how many issues (mostly tiny) we hadn't noticed before.
This helps because you notice some problems yourself before they are reported to the support team. A sketch of such a script follows.
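A minimal sketch of such a daily sweep, assuming a hypothetical log path, hypothetical addresses, and a local MTA:

```python
#!/usr/bin/env python3
"""Sketch of the daily log sweep described above: grep for
ERROR/Exception/FATAL and mail the hits. Paths, addresses, and the
SMTP host are all assumptions."""

import re
import smtplib
from email.message import EmailMessage

LOG_PATH = "/var/log/myapp/app.log"        # hypothetical log location
PATTERN = re.compile(r"ERROR|Exception|FATAL")

def sweep_and_mail():
    hits = []
    with open(LOG_PATH, errors="replace") as log:
        for line in log:
            if PATTERN.search(line):
                hits.append(line.rstrip())
    if not hits:
        return
    msg = EmailMessage()
    msg["Subject"] = f"{len(hits)} suspicious log lines in {LOG_PATH}"
    msg["From"] = "logwatch@example.com"
    msg["To"] = "dev-team@example.com"
    msg.set_content("\n".join(hits[:500]))   # cap the body size
    with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA
        smtp.send_message(msg)

if __name__ == "__main__":
    sweep_and_mail()
```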

Technical features:
In the error dialogue for a desktop app, include a clickable button that opens up an email and attaches the stack trace and log, including system properties.
On an error screen in a web app, report a timestamp (down to nanoseconds), an error code, the PID, etc., so server logs can be searched.
Allow log levels to be dynamically changed at runtime. Having to restart your server to do this is a pain. (A sketch follows this list.)
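A minimal sketch of runtime-adjustable verbosity on POSIX systems, using a signal handler (the logger name is an assumption; a config endpoint or admin page would do equally well):

```python
# Sketch: flip log verbosity at runtime without a restart.
# Here SIGUSR1 toggles DEBUG/INFO. POSIX only; adapt for other platforms.
import logging
import signal

logger = logging.getLogger("myapp")  # hypothetical logger name
logging.basicConfig(level=logging.INFO)

def toggle_debug(signum, frame):
    current = logging.getLogger().getEffectiveLevel()
    new_level = logging.DEBUG if current != logging.DEBUG else logging.INFO
    logging.getLogger().setLevel(new_level)
    logger.info("Log level now %s", logging.getLevelName(new_level))

signal.signal(signal.SIGUSR1, toggle_debug)
# Support can now run `kill -USR1 <pid>` to get verbose logs
# from a live process, then toggle back when done.
```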
Non-technical:
Provide a known issues section in your documentation. If this is a web page, it could correspond to a triaged bug list from your bug tracker.
Depending on your audience, expose some kind of interface to your issue tracking.
Again, depending on audience, provide some forum for the users to help each other.
Usability solves problems before they are a problem. Sensible, non-scary error messages often allow a user to find the solution to their own problem.
Process:
watch your logs. For a server-side product, regular reviews of logs are a good early warning sign of impending trouble. Make sure support knows when you think there is trouble ahead.
allow time to write tools for the support department. These may start off as debugging tools for devs, become a window onto the internal state of the app for support, and even become power tools for future releases.
allow some time for devs to spend with the support team: listening in on customer support calls, going out on site, etc. Make sure that the devs are not allowed to promise anything. Debrief the dev afterwards - there may be feature ideas there.
where appropriate, provide user training. An impedance mismatch between the user's mental model and the software can cause the user to perceive problems with the software itself.

Make sure your application can be deployed with automatic updates. One of the headaches of a support group is upgrading customers to the latest and greatest so that they can take advantage of bug fixes, new features, etc. If the upgrade process is seamless, stress can be relieved from the support group.
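A minimal sketch of the simplest form of update check, assuming the vendor publishes the latest version string at a known URL (the URL and version scheme here are hypothetical):

```python
# Sketch only: URL and version scheme are assumptions.
import urllib.request

CURRENT_VERSION = "2.3.1"
VERSION_URL = "https://updates.example.com/myapp/latest-version.txt"

def update_available() -> bool:
    with urllib.request.urlopen(VERSION_URL, timeout=5) as resp:
        latest = resp.read().decode().strip()
    # Naive inequality check; a real implementation would compare
    # parsed version components and then download/apply the update.
    return latest != CURRENT_VERSION
```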

Similar to a combination of jamesh's answers, we do this for web apps:
Supply a "report a bug" link so that users can report bugs even when they don't generate error screens.
That link opens up a small dialog which in turn submits via Ajax to a processor on the server.
The processor associates the submission to the script being reported on and its PID, so that we can find the right log files (we organize ours by script/pid), and then sends e-mail to our bug tracking system.
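A minimal sketch of such a processor, using Flask as the web framework (an assumption - the original answer doesn't name one) and hypothetical addresses:

```python
# Sketch of the server-side bug-report processor described above.
# Flask, the field names, and the mail addresses are all assumptions.
import smtplib
from email.message import EmailMessage
from flask import Flask, request

app = Flask(__name__)

@app.route("/report-bug", methods=["POST"])
def report_bug():
    script = request.form.get("script", "unknown")  # page being reported on
    pid = request.form.get("pid", "unknown")        # ties report to log files
    description = request.form.get("description", "")
    msg = EmailMessage()
    msg["Subject"] = f"[bug-report] {script} (pid {pid})"
    msg["From"] = "webapp@example.com"
    msg["To"] = "bugs@example.com"                  # bug tracker's mail gateway
    msg.set_content(f"Script: {script}\nPID: {pid}\n\n{description}")
    with smtplib.SMTP("localhost") as smtp:         # assumes a local MTA
        smtp.send_message(msg)
    return {"status": "ok"}
```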

Provide a known issues document
Give training on the application so they know how it should work
Provide simple, concise log lines that they will understand, or create error codes with a corresponding document that describes each error

Some thoughts:
Do your best to validate user input immediately (see the sketch after this list).
Check for errors or exceptions as early and as often as possible. It's easier to trace and fix a problem just after it occurs, before it generates "ricochet" effects.
Whenever possible, describe how to correct the problem in your error message. The user isn't interested in what went wrong, only how to continue working:
BAD: Floating-point exception in vogon.c, line 42
BETTER: Please enter a dollar amount greater than 0.
If you can't suggest a correction for the problem, tell the user what to do (or not to do) before calling tech support, such as: "Click Help->About to find the version/license number," or "Please leave this error message on the screen."
Talk to your support staff. Ask about common problems and pet peeves. Have them answer this question!
If you have a web site with a support section, provide a hyperlink or URL in the error message.
Indicate whether the error is due to a temporary or permanent condition, so the user will know whether to try again.
Put your cell phone number in every error message, and identify yourself as the developer.
Ok, the last item probably isn't practical, but wouldn't it encourage better coding practices?
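To make the BAD/BETTER contrast above concrete, here is a minimal sketch of validating immediately and telling the user how to continue (the support URL is hypothetical):

```python
# Sketch of the BAD/BETTER contrast: validate immediately and tell the
# user how to continue, not which line of vogon.c exploded.
def parse_dollar_amount(raw: str) -> float:
    try:
        amount = float(raw)
    except ValueError:
        raise ValueError(
            "Please enter a dollar amount, e.g. 19.99. "
            "If the problem persists, see https://example.com/support."
        ) from None
    if amount <= 0:
        raise ValueError("Please enter a dollar amount greater than 0.")
    return amount
```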

Provide a mechanism for capturing what the user was doing when the problem happened: a logging or tracing capability that gives you and your colleagues data (what exception was thrown, stack traces, program state, what the user had been doing, etc.) so that you can recreate the issue.
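One way to sketch such a capture mechanism in Python: keep a ring buffer of recent user actions and dump it, together with the stack trace, whenever an uncaught exception escapes (the report location is an assumption):

```python
# Sketch: ring buffer of recent user actions plus a crash hook.
import sys
import traceback
from collections import deque

recent_actions = deque(maxlen=50)  # the last 50 things the user did

def record_action(description: str):
    recent_actions.append(description)

def crash_handler(exc_type, exc_value, exc_tb):
    with open("crash-report.log", "a") as report:  # hypothetical location
        report.write("Uncaught exception:\n")
        traceback.print_exception(exc_type, exc_value, exc_tb, file=report)
        report.write("Recent user actions:\n")
        for action in recent_actions:
            report.write(f"  - {action}\n")
    sys.__excepthook__(exc_type, exc_value, exc_tb)  # still crash loudly

sys.excepthook = crash_handler
```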
If you don't already incorporate automated developer testing in your product development, consider doing so.

Have a mindset for improving things. Whenever you fix something, ask:
How can I avoid a similar problem in the future?
Then try to find a way of solving that problem.

Related

Developing using pre-release dev tools

We're developing a web site. One of the development tools we're using has an alpha release available of its next version, which includes a number of features we really want to use (i.e. they'd save us from having to implement thousands of lines to do pretty much exactly the same thing anyway).
I've done some initial evaluations on it and I like what I see. The question is, should we start actually using it for real? I.e. beyond just evaluating it, actually using it for our development and relying on it?
As alpha software, it obviously isn't ready for release yet... but then nor is our own code. It is open source, and we have the skills needed to debug it, so we could in theory actually contribute bug fixes back.
But on the other hand, we don't know what the release schedule for it is (they haven't published one yet), and while I feel okay developing with it, I wouldn't be so sure about using it in production so if it isn't ready before we are then it may delay our own launch.
What do you think? Is it worth taking the risk? Do you have any experiences (good or bad) of similar situations?
[EDIT]
I've deliberately not specified the language we're using or the dev-tool in question in order to keep the scope of the question broad, as I feel it's a question that can apply to pretty much any dev environment.
[EDIT2]
Thank you to Marjan for the very helpful reply. I was hoping for more responses though, so I'm putting a bounty on this.
I've had the experience of contributing to an open source project once, as you say you hope to contribute. They ignored the patch for one year (they have customers to attend to, of course, although they don't sell the software but the support). After one year, they rejected the patch with no alternative solution to the problem, and without a sound justification. It was just out of their scope at that time, I guess.
In your situation I would try to solve one or two of their not-so-high-priority, already-reported bugs and see how responsive they are, and then decide - because your success on deadlines will be hostage to theirs. If you have to maintain a copy of their artifacts, that's guaranteed pain.
In short: not only evaluate the product, evaluate the producers.
Regards.
My personal take on this: don't. If they don't come through for you in your time scale, you're stuck and will still have to put in the thousands of lines yourself and probably under a heavy time restriction.
Having said that, there is one way I see you could try and have your cake and eat it too.
If you see a way to abstract it out, that is to insulate your own code from the library's, for example using adapter or facade patterns, then go ahead and use the alpha for development. But determine beforehand what the latest date is according to your release schedule that you should start developing your own thousands of lines version behind the adapter/facade. If the alpha hasn't turned into an RC by then: grin and bear it and develop your own.
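A minimal sketch of that insulation, with entirely hypothetical names: the rest of the codebase programs against your own interface, and the alpha library sits behind one adapter you can swap out later:

```python
# Sketch of the adapter/facade insulation idea. 'alphalib' and the
# renderer interface are hypothetical stand-ins for the real library.
from abc import ABC, abstractmethod

class TemplateRenderer(ABC):
    """The only surface the rest of our code is allowed to touch."""
    @abstractmethod
    def render(self, template_name: str, context: dict) -> str: ...

class AlphaLibRenderer(TemplateRenderer):
    """Thin adapter over the alpha-stage library."""
    def render(self, template_name: str, context: dict) -> str:
        import alphalib  # hypothetical alpha dependency
        return alphalib.render(template_name, context)

class HomegrownRenderer(TemplateRenderer):
    """Fallback we write ourselves if the alpha never stabilizes."""
    def render(self, template_name: str, context: dict) -> str:
        raise NotImplementedError("the thousands-of-lines version")
```

If the alpha ships in time, only the adapter ever knew about it; if not, you implement the fallback behind the same interface and nothing else changes.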
It depends.
For opensource environments it depends more on the quality of the release than the label (alpha/beta/stable) it has. I've worked with alpha code that is rock solid compared to alleged production code from another producer.
If you've got the source then you can fix any bugs, whereas with closed source (usually commercially supported) you could never release production code built with a beta product, because it's unsupported by the vendor who has the code, and so you can't fix it.
So in your position I'd be assessing the quality of the alpha version and then deciding if that could go into production.
Of course all of the above doesn't apply to anything even remotely safety critical.
It is just a question of managing risks. In open source, an alpha release can mean a lot of different things. You need to be prepared to:
handle API changes;
provide bug fixes and workarounds;
test stability, performance and scalability yourself;
track changes much more closely, and decide whether to adopt them yet;
track the progress they are making and their responsiveness to patches/issues.
You do use continuous integration, don't you?

Best Practices for Setup and Management of an Open Source Project

Later this year I want to release a PHP framework that I've been working on as open source. I do use source control (SVN), but it's on an extremely limited basis. I'm self-taught, I develop by myself and don't have the experience of working with large teams. I have some ideas about what can help make a project successful, but I'm fuzzy on some of the details. Since it's not yet released, I want to do everything I can to set up the right infrastructure from the beginning. What do I need to know in order to setup and manage a successful project?
Some ideas that I have to make it successful (beyond marketing it):
Good documentation and tutorials
Automated unit tests and builds to push updates to the website
A clear roadmap
Bug tracking integrated with the source control
A style guide to keep the code consistent
A forum for the community to get support, share ideas, etc.
A good example application built with the framework
A blog to keep the community informed
Maintaining backwards compatibility wherever possible
Some of my questions:
How do I set up and automate a one-step submit-test-commit-generate API docs-push update to website process? Edit: Would Ant or Maven be good candidates for this? If so, do you know of any resources for setting up a PHP project using them?
How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated?
What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer to have it be as friendly and helpful as possible without a lot of drama.
I'd love to learn from your experience on any of these points. If you think I'm missing anything big, please share that as well. Any resources (preferably geared toward a beginner) that you could point me towards would also be greatly appreciated.
I'm just getting started in community projects, but I'll give you some advice on what I know.
How do I setup and automate a one step submit-test-commit-generate API docs-push update to website process?
I've never implemented it as one process. You could just have a checklist, and possibly create some scripts to do certain tasks. I've never worked with any source control that automates the uploading and such from a script; most of the time there is some web interaction involved.
You don't want to push API changes until it's an official release.
EDIT: Working Environment
For PHP, most of the time I either edit directly on the server and test it there, using beta.example.com or similar, before pushing to example.com. You could also set up a web environment on your home PC (using XAMPP on Windows, or a standard LAMP installation on Linux). You would probably just use a mirror of your repository here, so you'd do svn commit, or whatever is appropriate for the VCS or DVCS you choose.
The fun part is testing this with different PHP versions. I've not done this myself, but you could probably use a .htaccess file to run a different PHP binary in order to test it out. I'm not really sure what the best option for this is.
I've not done much with API documentation, as I've never created a library, but a quick search turned up http://www.phpdoc.org/. It looks like a mature project, so that might be a starting point.
As far as creating releases goes, I generally create a script that includes only the files that are part of the distribution (it filters out any VCS files and anything else you don't want in the distributed archive). You could write a script around find on Linux (which is what I do most of the time), or there may be better options.
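A minimal sketch of such a release script in Python (the exclusion patterns and archive name are assumptions):

```python
# Sketch of a release script: copy everything except VCS metadata and
# other unwanted files into a versioned tarball. Patterns are assumptions.
import os
import tarfile

EXCLUDE_DIRS = {".svn", ".git", "tests-private"}   # hypothetical exclusions

def build_release(source_dir: str, version: str) -> str:
    archive = f"myframework-{version}.tar.gz"      # hypothetical name
    with tarfile.open(archive, "w:gz") as tar:
        for root, dirs, files in os.walk(source_dir):
            # Prune excluded directories in place so os.walk skips them.
            dirs[:] = [d for d in dirs if d not in EXCLUDE_DIRS]
            for name in files:
                path = os.path.join(root, name)
                tar.add(path, arcname=os.path.relpath(path, source_dir))
    return archive

# e.g. build_release(".", "0.1.0") -> "myframework-0.1.0.tar.gz"
```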
How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated?
This is mostly handled by the bug tracker, and limited access in the Version Control System. Usually, you, and the people you allow, can commit to the VCS. Other users can submit patches, but then you might have someone review the patch, test the patch, and commit. You could split these tasks up as a team, or assign a patch to one person and have them do it all.
What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer to have it be as friendly and helpful as possible without a lot of drama.
I would just make sure to keep it as positive as possible with the project members and community. There's going to be some disagreements, and it will drive a few people away, but as long as you have a stable product that meets the needs of most people, I think that's all that anyone can expect.
One minor suggestion that's worked well for me: start using first-person plural pronouns rather than singular ones. That is, talk about "we" and "us" rather than "I" and "me." It encourages other people to participate when they feel like part of a team, rather than like they're contributing to your self-aggrandizement.
The most important thing you have to do is to attract users. Without users, you won't get any contributions and developers helping you out. Because developers are users first, and then they decide to extend/fix something they use and might become contributors.
So to get users, you should consider
describe what your framework does in one or two sentences at the top of your project page
mention how your framework can be used and for what, what situations it is most useful for
add a lot of examples on how to use it
mention whether your framework is stable, beta, or alpha. That's important because users need to know that before they start using it
also mention whether you want to keep improving it and keep working on it - most users don't want to use a framework that's abandoned (also keep in mind that a lot of users check your commits to see whether you really are working on it - if your last commit to the repository was months ago then you're not really working on it, so cheating isn't possible)
If you get all this right and people start submitting patches, you can use a patch tool to apply them to your source. Depending on your version control system, you can either use GNU patch, a diff/patch tool that comes with your version control, or maybe even a GUI tool that helps you with this. SVN doesn't have a patch tool (yet), but 'svn diff' will create a patch file which you can then apply with the GNU patch tool; or, if you're using TortoiseSVN, right-drag the patch file to your working copy and have TortoiseMerge apply it for you.
And on how to best deal with the community:
answer questions in time, don't wait more than two or three days to answer questions
try to be nice, even with upset and angry people. Only if they keep being bothersome should you tell them (still nicely, if possible) to go elsewhere
always keep discussions about the project on a mailing list. You don't want to repeat the same discussions over and over again - if you have a mailing list, just point users to the archives before the discussion starts all over again
And you should watch the talk "How Open Source Projects Survive Poisonous People (And You Can Too)" - it's really good and tells you a lot on how to deal not just with 'poisonous people' but also how to deal with all people involved in your project.
I'd like to add that you should make it as easy as possible for your users to get the whole thing running and modify the code - these 'power users' can be 'converted' into developers or at least people who send smaller patches.
Don't try to do it all yourself - for open source projects there are several hosting providers that solve most of these problems. I recommend CodePlex or Google Code.
Setting up build scripts will depend a certain amount on what platform you set up, but in general it's easy to add any tool you want into the script once you start using any sort of build script.
If you really need the one step process you describe, you need a build server. I use TeamCity, which I have set up to watch for any changes in svn and trigger build/test whenever something is checked in. The build server will generally be able to perform any steps that you put into the build script.
Read up on Git as an alternative to SVN
a free public repository/bug tracker/wiki/fork-enabled community on GitHub (which hosts Symfony and PHPUnit, among others)
"How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated?" - with Git, pull whatever you/your closest team find most interesting into the master branch
Consistent API
be inspired by other public APIs
only change it in major versions
make it guessable
Interesting for both users & developers
a clear goal (your roadmap - excellent)
useful, compared to everything else available
easy to use, but still not so easy that it's easier to write/maintain yourself
You could check out either Ant or Phing to build your project. Include CodeSniffer in the build and you'll save time checking for basic formatting errors/differences.
These are all technical tips. As for the soft part... treat humans with respect, take a lot of interest in their work, and be overly excited about their contributions, so they feel that they're not wasting their time. That's what would appeal to me.
Take a look at Karl Fogel's book on Producing Open Source Software. It probably has everything that you asked.
You should also plan for engaging the community. I'd recommend reading Jono Bacon's The Art of Community [http://www.artofcommunityonline.org/].
You have a great set of ideas to start. You might have to start by trimming them down! Ask yourself what's necessary for a first release.
For automating the builds and tests, the scripting can be done with ant, maven or phing for PHP projects.
You'll probably need a host so you can demo the product. For PHP that is pretty easy to find.
You need an open source hosting provider - especially GitHub (but also Google Code, SourceForge, etc.). GitHub provides bug tracking, default licenses, a blog, and great mechanisms for accepting changes from the community. Built on git, it facilitates distributed projects quite well.
Although it's good to have a one-step build and install in place, automating the integration of others' changes probably isn't important (or desirable) right off the bat.
Good luck!

What's the difference between a bug tracking and an issue tracking system?

I'm looking for both an explanation of why and when you would use each system and what features differentiate a bug vs. issue tracking application.
Issue tracking systems usually integrate more with customers and customer issues. An issue could be "help me install this" or "How do I get the fubar into the flim flam?". It could even be something like "I need an evaluation key for your software".
Bug tracking systems help you keep track of wrong or missing things from the program.
When looking at web systems, there is usually a big difference in focus, either helping customers or tracking problems with your software.
The difference could be clearer from the following example.
Suppose you had a production issue today that affected 5 customers, but was caused by a single software defect.
In your issue-tracking system, you opened 5 tickets and started tracking what each customer reported, what was communicated to them, when the software patch was applied, etc. You can track that kind of stuff separately for each customer.
In your bug-tracking system, you made one entry for the software defect and started tracking things like steps to reproduce, code changes, etc.
Customer issues can be closed whenever they're remedied to the customer's satisfaction and that may or may not involve fixing the software. The bug can be closed when it's fixed and retested.
Two systems, outward- and inward-facing, tracking two different kinds of things, each with its own life cycle.
Bug tracking systems like Trac are designed to have one ticket for each problem intrinsic to the program, so a ticket is closed by modifying the program.
Customer support ticket systems like IssueTrackerProduct are designed to have one ticket for each customer experiencing a situation, so a ticket is closed by working out the situation for that customer (possibly by modifying the program).
For examples of each, see Wikipedia's Comparison of issue tracking systems
A bug is a subclass of issue. All bugs are issues, but not all issues are bugs.
Typically a bug is a defect in the codebase. This is different from an incomplete or yet-to-be-implemented feature, or something harder to pin down, like a developer putting in a ticket to deal with a piece of technical debt, or a concern with the UI. All of these are 'issues', semantically speaking.
A generic issue, when not falling under those other categories, is more often than not a representation of something reported by the end-user. In most systems, this reported issue is handled as a bug-report in itself. I'd venture to say this is a mistake.
The tricky part is that sometimes multiple issues may be related to other issues. They could concern the same bug or multiple bugs, or actually be feature requests. That is to say, there can be a many-to-many relationship between issues and bugs.
Why does the distinction matter? Well, there is a natural tree internally - resolving one issue can indirectly complete (or contribute to completing) a million other issues. It also makes a difference in how an issue is resolved. A defect may be resolved with a code change that fixes it or makes it irrelevant; a user complaint may be resolved by sending the user a workaround, and then left to be followed up on when the underlying defect is fixed.
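A toy sketch of that many-to-many linkage and its resolution semantics - fixing a defect flags each linked customer issue for follow-up rather than closing it automatically:

```python
# Toy sketch: issues and bugs linked many-to-many; fixing a bug does not
# auto-close customer issues, it only flags them for follow-up.
class Bug:
    def __init__(self, title):
        self.title = title
        self.fixed = False
        self.issues = []            # customer issues blaming this defect

class Issue:
    def __init__(self, customer):
        self.customer = customer
        self.bugs = []              # defects this report turned out to involve
        self.needs_followup = False

def link(issue, bug):
    issue.bugs.append(bug)
    bug.issues.append(issue)

def fix_bug(bug):
    bug.fixed = True
    for issue in bug.issues:
        # Each customer still needs confirmation before their ticket closes.
        issue.needs_followup = True
```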
Features that represent and work with these nuances in a useful way are really what to look for in a ticket tracking system.
At some point you are talking about processes and methodologies more than actual ticketing systems, and the actual names of things start to become irrelevant. Mainstream and enterprise-oriented shops tend to run on a popular system like ITIL, but you can get away with ad-hoc processes provided everyone on the team has a good understanding of customer service needs. I personally see it as a waterfall (ITIL) vs. agile (DevOps) situation.
It's just semantics. A bug is a problem; an issue is something to do. They are otherwise much the same.
It's a fuzzy line at best. An issue tracking system would probably be considered the more general of the two, in that all bug tracking systems are issue tracking systems, but not necessarily the opposite.
From our friend Wikipedia:
A bug tracking system is a software application that is designed to help quality assurance and programmers keep track of reported software bugs in their work. It may be regarded as a sort of issue tracking system.
A bug is found in code.
An issue can be found anywhere: in the processes, in hardware, in people.
It depends which development process you're adopting as to what the definitions mean.
I believe that a bug is something that can be fixed in code, while an issue is more of a problem with usability.
For example, a login form. A bug in the login form would be the form redirecting incorrectly after the login completes. While an issue would be that the overall login process is too slow, or there is no option to email a forgotten password.
This isn't really a complete answer to your question, but I've had similar questions come up with dealing with customers. I think at the highest level, a bug tracking system seems usually to be more developer focused. That is, developers are trying to track problems in the code. A function isn't returning the right value, more validation should be done, etc.
A good example of a system that integrates nicely with code is Trac.
Issue tracking systems seem to be more customer-centric. For example, being able to have a customer say "When I click on 'OK" I get an error". It may be user training, it may be a feature, or it may in fact be a bug.
So in many of the projects that I've worked on we keep these distinct. We have a high-level issue tracking system that may or may not result in an actual bug being created in the bug tracking system. However, many many bugs are tracked internally without any "issues" being created in the issue tracking system.
The problem that I see between these two is that it's really not very easy for inexperienced users to enter tickets into something like Trac because they get confused by the technical lingo. However, a high-level issue tracking system does not integrate tightly with code so it's useless to the developers.
Anyway... my $0.02.
Bugs: flaws anywhere within the process (application, database, reporting, etc.) that will prevent 100% of desired functionality from occurring. Also known and referred to as defects.
Issues: potentially caused by a bug or bugs, an issue is a report of some form of loss of functionality in the system that would be tied to a user. These are also referred to as help desk tickets in some organizations.
WIKIPEDIA LINKS
- Software Bug
- Issue Tracking
Answering this question requires context, and from the looks of it, Alan's answer fits your context.
In the world of software testing, one of the distinctions we make between an issue and a bug is: bugs are anything that threatens the value of the product, while issues are anything that threatens the value of testing (or the value of the project, and in particular the value of testing). Rapid Software Testing teaches us that.
In my experience the tracking systems allow you to make whatever distinction you want between the two. How you use a particular tracking system is up to you.
I don't think there is a definitive answer, but I usually just think of issue tracking as a more generic term that covers more than just "bugs". Using only the term "bug tracking" pigeonholes the tool as being about defects in software.
An issue tracker doesn't have to be tied to software though, and even BugZilla doesn't track only bugs, but also new enhancement / feature requests, votes, etc. In that way, I think of an "issue" as just a single item of interest that someone wants to get "done."
Lately there has also been a rise in work item tracking (e.g. in Visual Studio and IBM/Rational Jazz), which is lower-level than "issues": an issue could be seen as requiring some number of smaller work items to complete. At a higher level, you might also see something akin to a Milestone in BugZilla.
Bugs are specific to software developers. Issues are more general and can include all team members' progress on a project, including the graphic designers, system administrators, company executives, etc.
An issue tracker speaks in terms of things to do and can categorize an item as a bug if needed.
It is mostly just a matter of words, but I use an "issue tracker" because I work with many people who are not programmers, and we need to speak a common language by having a common productivity tool that makes us aware of what each other is doing.
You can use a bug tracker but it will just confuse non developers, especially if they have to think of their tasks as being a bug.
I would say it is also nice to draw a difference between a bug and an issue for programmers, as bugs are usually problems with existing code, and issues can be new feature requests.
Well... there is no difference, besides the fact that an issue is more than just a bug. It can be a task, a new feature, or simply an improvement. A bug is mostly seen as incorrect system behavior, while an issue has a broader definition, beyond just "it does not work".

Is it possible for open-source software to have viruses/spyware/malware? [closed]

Sorry if this is a stupid question, but sometimes I see Easter eggs and stuff in programs like Aptitude (the package manager for Debian).
Is it possible, or even likely, that more sinister features make their way into open-source software?
It's certainly possible, but it's more complicated. I don't know of any actual malware going around, but people have made mistakes with similar effects. (I know of mistakes that have been found; obviously, I don't know how many haven't been.)
If you put malware into closed-source software, the only way to find it is to detect the effects and analyze the binary. There are people who are very good at analyzing the binary.
In open-source software, anybody can look at the source code. Not many will, for most packages, but there's a much higher chance of being found out. Once found out, anybody can patch the software to do the good things without the bad. Moreover, most open source software has publicly available repositories, which means that anybody can track down the history of the code, and (at least to a pseudonym) who did what. There is also a tendency to produce more readable code in open source, so that changes will stand out more.
The caveat, of course, is that most of us really don't know what to look for in software security. If I run a compression program, and it compresses my file to a shorter version that looks like gibberish, and I can get the original back, I know that's working. If it changes it to gibberish that it claims is encrypted, I don't know a priori how to tell if it's well encrypted.
It's possible, but somewhat harder, because the source code is there. The author would be counting on no one bothering to read the source code before running it, which is true for a lot of people, I suppose. I know I don't bother to read the source code of the open source programs I run. In a larger project it's harder, because the code is often reviewed; but if there's just one author then it becomes a lot easier.
Any software can contain malicious parts (intentionally or unintentionally). The advantage of open source is that you can check it (if you like and have the time to do so).
Yes, it's possible; see the Debian OpenSSL debacle for a nice example:
http://www.metasploit.com/users/hdm/tools/debian-openssl/
Although this is not a virus/spyware/malware, it clearly shows what could go wrong in open source software.
I can't believe nobody's mentioned Ken Thompson's compiler virus yet.
Having access to the source code offers a reasonable level of assurance that the program won't behave maliciously. However, unless you've inspected one of:
The binary output of the compiler
The binary code of the compiler itself
The source code of the compiler (and its compiler, and its compiler, etc.) and the binary code of the compiler used to compile it.
you could end up with malicious code in the compiled binary that never appears in the source. Admittedly it's a very unlikely and extremely difficult form of attack, but it's theoretically possible to introduce malicious code into an open source project in a way that cannot be detected in the source code (of either the project or the compiler).
If you're not working for the CIA (or the equivalent agency of another government), compiler security probably isn't something you have to worry about. But it is a very cool concept to think about.
I'd say that it obviously is possible. All it requires is that code gets accepted without sufficient review. It's not hard to come up with scenarios permitting that, since reviewers are human.
The more interesting question then becomes how likely it is that malware gets accepted into some package where it can do harm. This is far harder to answer, unfortunately. We seem to be doing okay so far (knocks on wood), at least.
I guess Linus's law ("given enough eyeballs, all bugs are shallow") holds true, but it is easy to assume that just because something is open source, people will spend a lot of time eyeballing its code. That is not generally true, as far as I know.
EDIT: Changed the wording about Linus's law above, had it wrongly attributed.
Is it possible in principle? Of course. Any software can do things people don't want.
Is it possible in practice? The argument against is of course that the software is available, and that many eyes are looking at it, so it'll be discovered before it can do too much damage.
On the other hand, there is the Underhanded C Code Contest, http://www.underhanded-c.org/, in which you submit programs that intentionally misbehave, but where the cause of the bad behavior is not apparent from an inspection of the source.
Then there's of course the Debian SSL bug, where SSL keys generated with the OpenSSL library on Debian were quite insecure. This, apparently, was just an act of incompetence (Hanlon's razor, everybody), but it shows how security problems can sneak into open source code. With weak keys and SSH access, you don't need a virus in the code; you just exploit weak code when it's running on production systems.
Take that as a yes/no/maybe :)
Yes, it is possible, just as it's possible for closed-source software (a malicious developer on the team, etc.).
It's arguably less likely with open source though, as the moment anything like that is noticed, any other user can pull the problem code and it's no longer a problem.
Just as closed-source software can be viruses/spyware/malware, open source can be as well - just as there's tons of horrible open source software.
So far every piece of closed-source Windows software I've seen has been some sort of malware, though, so my bias is that closed-source apps have a higher chance of being crap overall.
Who prevents it? Even if software is open source, only a poorly run project lets anyone touch the release repository without authorization. Usually there are maintainers who review all the incoming patches.
99% of all software produced and used is poor quality and bug ridden.
It is, but usually it's noticed and removed before it becomes an issue. With any well-maintained open source software there are many people who check each revision for any changes that were made.
Yes it is possible, it depends how carefully controlled commit access to the source code is and how carefully monitored those commits are. Some projects have a few lead developers who request patches from the community and commit this to the code base, other projects will grant access to many more developers. Equally some projects have a large number of people reviewing the source code as changes are made.
It is possible, indeed, as it is code like any proprietary software. However, the main difference is that you - and the community - have access to the code, and this fact alone is enough to stop it from happening in almost all cases. Also, the vast variety of versions of libraries and kernels in circulation makes malware less likely to succeed.
Do you really know what you use? Do you check? Does the typical user check or care?
For example, google keywords like repository compromised or gpg repository compromised, or something along those lines.
It is possible, but not very likely.
There's nothing special about open source code that makes it magically resistant to containing bad things, but open source which is actively developed by a group of people is very unlikely to contain malicious code, because someone would notice and blow the whistle.
In addition, in most open source projects it's possible to trace the history of any particular piece of code, by looking through the project's source repository, which means that the author of a malicious piece of code can be identified.
If in doubt, you can always review the code yourself, or hire someone to review it for you. Code review generally won't catch subtle bugs or errors, of course, but malicious code is likely to be more obvious.
Another example where open-source security proved superior to closed source is the Interbase backdoor.
From The Register:
A back door password has been hidden in Borland/Inprise's popular Interbase database software for at least seven years, potentially exposing tens of thousands of private databases at corporations and government agencies to unauthorized access and manipulation over the Internet, experts say.
The password was discovered when Interbase was made open source.
Which doesn't mean that security of open-source software is perfect, or even good. Who needs to insert malware into original software when there are thousands of remotely exploitable security holes all over the place?
Is it likely that more sinister features make their way into open-source software?
Officially no, it's relatively unlikely that malware features will get in and go unnoticed for long. But:
servers holding distribution sources can be (and have been) compromised so that what you're downloading doesn't correspond to the open-source development work;
in the case of binary distributions (usually for Windows), the installer for the software can be packaged with malware. Again, officially this happens quite rarely; one example is early versions of LimeWire, which installed a 'shopping helper' affiliate-fee-stealing BHO to "support the project", and lost a lot of goodwill doing so;
but, there are also some scam artists who squat search results for well-known open source projects (again, most commonly with file-sharing software) and deliver their own tweaked installers bundled with malware. Always find the project's official site before downloading.
Just a reminder:
SSH has been backdoored before
WordPress has been backdoored before
Both of these happened as the result of an attack, so the code wasn't planted by the original developers. I think this kind of proves that it can happen, but that it's not that likely, and that it will get picked up quite soon if the application is popular enough.

To what extent should code try to explain fatal exceptions?

I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, disk full, etc.
In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs, and maybe even the error can be echoed (depending on whether you can tell that they're a technical person as opposed to a business user). For the moment, though, let's not think too hard about this part.
My question is, to what extent should the software be responsible for trying to explain the meaning of the fatal error? In general, how much competence/knowledge are you allowed to presume on administrators of the software, and how much should you include troubleshooting information and potential resolution steps when logging fatal errors? Of course if there's something that's unique to the runtime context this should definitely be logged; but lets assume your software needs to talk to Active Directory via LDAP and gets back an error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config?
I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.
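For illustration, the "parse it for the admin" option from the question might look like the sketch below. The table reflects the commonly documented Active Directory bind-error data codes (e.g. 525 is "user not found", which is what an incorrect user DN produces); treat it as a sketch, not an exhaustive mapping:

```python
# Sketch of parsing the AD-specific data code out of an LDAP error message
# like the one quoted above, so the log explains the likely cause.
import re

AD_BIND_DATA_CODES = {
    "525": "user not found - check the user DN in the LDAP config",
    "52e": "invalid credentials - check the bind password",
    "530": "logon not permitted at this time",
    "532": "password expired",
    "533": "account disabled",
    "701": "account expired",
    "773": "user must reset password",
    "775": "account locked out",
}

def explain_ldap_error(message: str) -> str:
    match = re.search(r"data ([0-9a-f]+)", message)
    if match and match.group(1) in AD_BIND_DATA_CODES:
        return f"LDAP bind failed: {AD_BIND_DATA_CODES[match.group(1)]}"
    return f"LDAP bind failed with unrecognized error: {message}"
```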
The approach I tend to agree with is that you should explain as much as possible if the fatal error is caused by code under your own responsibility (i.e. not third party). Otherwise, if the error is caused "further down", for example at the database level, then the administrators should be passed the error as returned, without much further information added. So if the database server dies, your connector will throw some exception, and you would log the error code from the exception.
The administrator or support staff should then have sufficient knowledge to resolve the issue with the provided information.
When you provide too much detail for errors which are not caused by your own code, you run the risk of the error details NOT matching the cause of the actual error, especially if the error codes stop matching between versions.
Of course, there are exceptions. We have worked with open source libraries that were so poorly documented that we ended up writing wrappers around the libraries just to provide decent logging of what actually is going on.
Just my 2c
The answer, as for all broad questions, is "it depends."
If you're looking at a configuration error, then by all means you should try to explain what was wrong (in the logs). If it's an out-of-memory error, there's not much you can do -- and you may not even be able to write a log message.
One thing you said concerns me:
If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs
If this is truly a fatal error, the server should not be running, and therefore any incoming request should fail with absolutely no warning or explanation.
You should at least provide the message from the exception and a stack trace so you can find out where in the code it occurred. If possible, you should also explain what you were attempting to do and what you think may have happened depending on the exception type.
I guess it depends on how much time you have before delivering the software to your customers.
Yes, it would be nice to parse the error and give a more explicit message but, in this day and age, Google is never very far away.
So unless you have time to write the code to parse errors, I would leave them as is.
IMHO you can never provide too much information in these cases.
In the real world it comes down to cost-benefit analysis, though. What's the impact of the error on you, your app, your business, etc.? How much time is it worth spending on it?
In a business critical app my first point applies. Everything else is a sliding scale.
I think it depends on who is using the application.
If the application is used by tech savvy people then show more technical details, so they will be able to troubleshoot the problem if they want. I've had some users go to great lengths to solve issues. It can be very helpful, especially for issues that are specific to certain configurations.
If your user base is more of the average Joe then technical details will confuse them in most cases. You should show them a simple error message, and try to offer some solutions if possible.
You could also merge the two techniques. Show a simple error message by default and allow the user to view more detailed error information if they want.
You just don't want to overwhelm the user with too much information that they don't understand. It just frustrates and confuses them in the majority of cases.
There are two aspects I think all errors and exceptions should have:
1) Enough information in the error to help debug the problem. Stack trace, class/method name, type of exception, etc. fall into this category.
2) A human-understandable message, ideally clear enough for, say, the Ops team or sysadmins to know whom to call or where to forward the error message. Typically it is of the form "such-and-such module failed" or "network call failed" - something as close as you can get to explaining the problem to a customer in non-technical jargon.
Now, with all the usual time constraints, it may not be possible to program in both kinds of message. In that case I would go out on a limb and say we should at least have the second type of error message. Remember, the sysadmin can probably call you, and since you helped write the code you can maybe pinpoint the error; but if the customer is on the phone asking about the error, the sysadmin had better be able to explain the possible cause :)
On a different note, all products need a clear exception/error handling mechanism decided at the architecture level, and the exceptions NEED to adhere to that design. There are few things more frustrating than trying to debug an error based on a design, only to find out a day later that it's a one-of-a-kind error message based on a completely different design.