Let's say your company has given you the time & money to acquire training on as many advanced programming topics as you can eat in a year, carte blanche. What would those topics be and how would you prefer to acquire them?
Assumptions:
You still have deliverables to produce, but you're allowed one week per month throughout the year for this training.
The training can come from anywhere, e.g. classroom, on-site instructor, books, subscriptions, podcasts, etc.
Subject matter can cover any platform, technology, language, DBMS, toolset, etc.
Concurrent/parallel programming and multi-threading, especially with respect to memory models and memory coherency. I think every programmer should be aware of the considerations in this arena as we move into a world of multi-core/multi-CPU hardware.
For this I would probably rely on Internet research most heavily, but an on-campus primer at a good university could be a good way to start off.
Security!
Far too many programmers just build something and think they can add security as an afterthought after finishing the "main" part of the program. You could always benefit from knowing more about how to secure your app, how to design software to be secure from the get-go, how to do intrusion detection, etc.
Advanced Database Development
Things like data warehousing (MDX, OLAP queries, star schemas, fact tables, etc), advanced performance tuning, advanced schema and query patterns, and the like are always useful.
Here are the three that I'm always finding myself explaining to junior developers who didn't get enough CS training. All that other stuff is generally more hype than substance, or can be fairly easily picked up. But if you don't know these three, you can do a great deal of damage:
Algorithm analysis, including Big O notation.
The various levels of cohesion and coupling.
Amdahl's Law, and how it pertains to optimizations (a quick sketch follows this list).
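Since Amdahl's Law comes up so often in optimization arguments, here is a minimal sketch of the formula in Python (the 0.95 figure below is an illustrative assumption, not a measurement):

```python
# Amdahl's Law: if a fraction p of the work can be parallelized across n
# workers, the overall speedup is bounded by 1 / ((1 - p) + p / n).
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Upper bound on whole-program speedup when only part of it parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

if __name__ == "__main__":
    # Even with 95% of the work parallelized, adding workers flattens out
    # near a 20x ceiling, which is the point the law is usually making.
    for n in (2, 8, 64, 1000):
        print(f"{n:>5} workers -> {amdahl_speedup(0.95, n):.2f}x")
```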
Internationalization issues, especially since it doesn't sound like it would be an advanced topic. But it is.
Accessibility
It's ignored by so many organizations but the simple fact of the matter is that there are a huge number of people with low or no vision, color blindness, or other differences that can make navigating the web a very frustrating experience. If everybody had at least a little bit of training in it, we might get some web based UIs that are a little more inclusive.
Object oriented design patterns.
I guess "advanced" is different for everyone, but I'd suggest the following as being things that most decent developers (i.e. ones that don't need to be told about NP-completeness or design patterns) could gain from:
Multithreading techniques that go beyond "lock" and when to apply them (a sketch follows this list).
In-depth training to learn and habitualize themselves with clever features in their toolchain (IDE/text editor, debugger, profiler, shell).
Some cryptography theory and hands-on experience with different common flaws in security schemes that people create.
If they program against a database, learn the internals of their database and advanced query composition and tuning techniques.
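For the "beyond lock" point above, here is a minimal, hypothetical sketch of one such technique: coordinating a producer and a consumer with a bounded queue instead of hand-rolled locking (the names and item counts are invented for illustration).

```python
# Producer/consumer coordination via queue.Queue: the queue handles its own
# locking and blocking, so no explicit lock appears in user code.
import queue
import threading

SENTINEL = object()  # tells the consumer that production is finished

def producer(q: queue.Queue) -> None:
    for item in range(5):
        q.put(item)      # blocks when the queue is full (built-in back-pressure)
    q.put(SENTINEL)

def consumer(q: queue.Queue) -> None:
    while True:
        item = q.get()   # blocks until an item is available
        if item is SENTINEL:
            break
        print("consumed", item)

if __name__ == "__main__":
    q = queue.Queue(maxsize=2)
    workers = [threading.Thread(target=producer, args=(q,)),
               threading.Thread(target=consumer, args=(q,))]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
```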
Developers should know the basics of SQL development and how their decisions impact database performance. It is one thing to write a query; it is another thing to write a query, understand the explain plan, and make design decisions based on that output. I think a good course on PL/SQL development and database performance would be very beneficial.
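As a small illustration of reading the plan rather than just running the query, here is a sketch using the standard-library sqlite3 module (the table and index names are made up; other engines expose the same idea through their own EXPLAIN variants):

```python
# Ask the engine how it intends to execute a query before trusting it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = ?", (42,)
).fetchall()
for row in plan:
    # The detail column should mention idx_orders_customer; a full table
    # scan here would be the cue to rethink the index or the query.
    print(row)
```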
Unfortunately communication skills seem to fall under the "advanced topics" section for most developers (present company excluded, of course).
Best way to acquire this skill: practice.
Take off the headphones, and talk to someone instead of IM'ing or emailing the guy at the next desk.
Pick up the phone and talk to a client instead of lobbing an email over the fence.
Ask questions at a conference instead of sitting behind your laptop screen twittering.
Actively participate in a non-technical meeting at work.
Present something in public.
Most projects do not fail because of technical reasons. They fail because they could not create a team. Communication is vital to team dynamics.
It will not harm your career either.
One of the best courses I took was a technical writing course. It has served me well in my career.
Additionally: it probably does not matter WHAT the topic is - the fact that the organization is interested in it and is paying for it and the developers want to go and do go, is a better indicator of success/improvement than any one particular topic.
I also don't think it matters that much what the topic is. Dev organizations deal with so many things during a project that training and then on the job implementation/trial and error will always get you some better perspective - even if the attempts to try out/use the new stuff fail. That experience will probably help more on the subsequent projects.
I'm a book person, so I wouldn't really bother with instruction.
Not necessarily in this order, and depending on what you know already:
OO Programming
Functional Programming
Data structures and algorithms
Parallel processing
Set-based logic (essentially the theory behind SQL and how to apply it)
Building parsers (I only put this, because it actually came up where I work)
Software development methodologies
NP Completeness. Specifically, how to detect if a problem is NP-Complete, and how to build an approximate solution to the problem.
I see this as important because you don't want a developer to try and solve an NP-complete problem by getting the optimum solution, unless the problem's search space is very small, in which case brute force is acceptable. However, as the search space increases, the time required to solve the problem increases exponentially.
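As a concrete, hedged example of "approximate instead of optimal", here is the textbook greedy 2-approximation for minimum vertex cover (the sample graph is invented):

```python
# Greedy 2-approximation for minimum vertex cover: whenever an edge is still
# uncovered, take both of its endpoints. Any optimal cover must contain at
# least one of them, so the result is at most twice the optimum size.
def approx_vertex_cover(edges: list[tuple[int, int]]) -> set[int]:
    cover: set[int] = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

if __name__ == "__main__":
    graph = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
    print(approx_vertex_cover(graph))  # a valid cover, not necessarily minimal
```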
I'd cover new technologies and trends. Some of the new technologies I'm researching/enhancing my skills with include:
Microsoft .NET Framework v3.0/v3.5/v4.0
Cloud Computing Frameworks (Amazon EC2, Windows Azure Services, GoGrid, etc.)
Design Patterns
I am from the MS-based developer world, so here is my take on this:
More about new concepts in cloud computing (various APIs, etc.), as the industry has been betting on it for some time.
More about LINQ for the .NET Framework.
Distributed databases
Refactoring techniques (which implies also learning to write a good set of unit/functional tests).
Knowing how to refactor is the best way to keep code clean -- it is rare when you get it right the first time (especially in new designs).
A number of refactorings, however, require a decent set of tests to check that the refactoring did not add unexpected behavior.
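A minimal, hypothetical illustration of that last point: pin down the current behaviour with a couple of tests first, and the refactoring (here, an imagined hand-rolled loop replaced by a generator expression) can proceed without fear of silently changing results.

```python
import unittest

def total_price(items):
    # Post-refactoring version; the tests below were written against the
    # original loop-based implementation and must still pass unchanged.
    return sum(qty * price for qty, price in items)

class TotalPriceTests(unittest.TestCase):
    def test_empty_order_costs_nothing(self):
        self.assertEqual(total_price([]), 0)

    def test_quantities_multiply_prices(self):
        self.assertEqual(total_price([(2, 3.0), (1, 4.5)]), 10.5)

if __name__ == "__main__":
    unittest.main()
```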
Parallel computing- the easiest and best way to learn it
Debugging
Debugging by David J. Agans is a good book on the topic. Debugging can be very complex when you deal with multi-threaded programs, crashes, algorithms that don't work, etc. Everybody would be better off being good at debugging.
I'd vote for real-world battle stories. Have developers from other organizations present their successes and failures. Don't limit the presentations to technologies you're using. With a significantly complex project, this is bound to cut into 'advanced' topics you haven't even considered. Real-world successes (and failures) have a lot to teach.
Go to the Stack Overflow DevDays and the ACCU conferences.
Read
Agile Software Development, Principles, Patterns, and Practices (Robert C. Martin)
Clean Code (Robert C. Martin)
The Pragmatic Programmer (Andrew Hunt & David Thomas)
Well if you're here I would hope by now you have the basics down:
OOP Best practices
Design patterns
Application Security
Database Security/Queries/Schemas
Most notably developers should strive to learn multiple programming languages and disciplines, in order for their skill set to be expanded in more than one direction. They don't need to become experts in these other skills but at least have a very acute understanding of integration with their central discipline. This will make them much better developers in the long run, and also let them gain the ability to use all tools at their disposal to create applications that can transcend the limitations of a singular language.
Outside of programming specific topics, you should also learn how to work under Agile, XP, or other team based methodologies in order to be more successful while working in a team environment.
I think an advanced programmer should know how to get your employer to give you the time & money to acquire training on as many advanced programming topics that you can eat in a year. I'm not advanced yet. :)
I'd suggest an Artificial Intelligence class at a college/university. Most of the stuff is fun, easy to grasp (the basics at least), and the solutions to problems are usually creative.
The Hitchhiker's Guide to the Galaxy.
How would I prefer to acquire the training? I'd love to have a substantial amount of company time dedicated to self-training.
I totally agree on accessibility. I was asked to look into it for the website at work, and there is a real lack of good knowledge on the subject, not to mention a lack of CSS standards to aid the likes of screen readers.
However, my answer goes to GUI design - it's quite a difficult thing to get right. There are too many awful applications out there that could be prevented just by taking the time to follow HCI (Human Computer Interaction) advice/designs. Take Google/Apple for inspiration when making a GUI - not your typical hundreds-of-buttons/labels combo that too often gets pushed out.
Automated testing: Unit testing, functional integration testing, non-functional testing
Compiler details (more relevant on some platforms than others): How does the compiler implement certain common constructs in language X? On a byte-code interpreted platform, how does JIT compilation work? What can be JIT-compiled (for example, can virtual calls be JIT-compiled)?
Basic web security
Common design idioms from other problem domains than the one you're working in at the moment.
I'd recommend learning about Refactoring, Test Driven Development, and various unit testing frameworks (NUnit, Visual Test, CppUnit, etc.) I'd also learn how to incorporate automated unit testing into your continuous integration builds.
Ultimately if you can prove your code does what it claims it can do, you don't have to be there to answer questions as to why or how. If a maintainer comes along and tries to "fix" your code, they'll know instantly if they broke it. Tests written around the requirements (use cases) explain to the maintainer what your users wanted it to do, and provide a little working example of how to call it. Think of unit tests as functional documentation.
Test Driven Development (TDD) is a more novel design approach that begins with the requirements, where you start by writing a test before you write the code. You then write exactly enough code required to pass the test. You have to stop before you write extra code (that you may never need), because you will refactor it later if you find that you really needed it.
What makes TDD cool is that a bad interface (such as one with lots of dependencies) is also very hard to write tests for. It's so hard that a coder would rather refactor the interface to make it easier to test. And that refactoring simplifies the code, removing inappropriate dependencies, or grouping related tests together to make it easier to test, thus improving cohesion. By making it immediately apparent to the developer when he's writing a badly interfaced module, the developer sticks to the architecture and gravitates to the principles of tight cohesion and loose coupling. Good interfaces are the natural result. And as a bonus, once you pass all your tests, you know you're done.
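A tiny, hypothetical red/green illustration of the flow described above: the tests for a made-up slugify() helper are imagined as written first, and the implementation contains only what those tests currently demand.

```python
import re
import unittest

def slugify(title: str) -> str:
    # Just enough code to pass the tests: lowercase, hyphens, nothing more.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class SlugifyTests(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_is_dropped(self):
        self.assertEqual(slugify("C++ in 21 Days!"), "c-in-21-days")

if __name__ == "__main__":
    unittest.main()
```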
On the surface this seems like an easy question to answer, just enter your favorite pet peeve about what other developers can't do correctly. But when I read through the answers and gave it some thought, I realized that every "advanced topic" brought up was covered in my undergraduate computer science curriculum--20 years ago. And I doubt that OO, security, functional programming, etc. concepts have changed in that time. Sure the tools have, but I argue that tools are different than topics.
So what is an "advanced topic" in computer science? Who is the Turing, Knuth, Yourdon of the 21st century?
I don't have a clear answer to this question, though I'd like to see more work on theories for parallel programming that will enable tools to abstract that messy stuff for developers.
Quite funny that no one has mentioned:
Debugging.
The tools & IDE you work with.
The platform you are developing for.
Everyday development is much more fun if you know your tools really well, and you accomplish more and make your life easier if you know how to debug someone else's code with ease.
Source Control
I wonder what would be a good definition of term "over-engineering" as applied to software development. The expression seems to be used a lot during software design discussions often in conjunction with "excessive future-proofing" and it would be nice to nail down a more precise definition.
Contrary to most answers, I do not believe that "presently unneeded functionality" is over-engineering; or if it is, it is the least problematic form.
Like you said, the worst kind of over-engineering is usually committed in the name of future-proofing and extensibility - and achieves the exact opposite:
Empty layers of abstraction that are at best unnecessary and at worst restrict you to a narrow, inefficient use of the underlying API.
Code littered with designated "extension points" such as protected methods or components acquired via abstract factories - which all turn out to be not quite what you actually need when you do have to extend the functionality.
Making everything configurable to "avoid hard-coding", with the effect that there's more (complex, failure-prone) application logic in configuration files than in source code.
Over-genericizing: instead of implementing the (technically uninteresting) functional spec, the developer builds a (technically interesting) "business rule engine" that "executes" the specs themselves as supplied by business users. The net result is an interpreter for a proprietary (scripting or domain-specific) language that is usually horribly designed, has no tool support and is so hard to use that no business user could ever work with it.
The truth is that the design that is most easily adapted to new and changing requirements (and is thus the most future-proof and extensible) is the design that is as simple as possible.
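As a caricatured sketch of the first bullet above (all names invented), compare an "extensible" notification layer that currently has exactly one implementation with the function it wraps; the simpler version is also the one that is easier to adapt later.

```python
# Over-engineered: a factory, an "abstraction", and a string-keyed lookup,
# all in service of sending one email.
class EmailChannel:
    def dispatch(self, recipient: str, body: str) -> None:
        print(f"emailing {recipient}: {body}")

class NotificationChannelFactory:
    def create(self, kind: str) -> EmailChannel:
        if kind == "email":
            return EmailChannel()
        raise ValueError(f"unknown channel: {kind}")

# Simple: does the same thing, and adding SMS later is one new function away.
def send_email(recipient: str, body: str) -> None:
    print(f"emailing {recipient}: {body}")
```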
Contrary to popular belief, over-engineering is really a phenomenon that appears when engineers get "hubris" and think they understand the user.
I made a simple diagram to illustrate this:
In the cases where we've considered things over-engineered, it's always been describing software that has been designed to be so generic that it loses sight of the main task that it was initially designed to perform, and has therefore become not only hard to use, but fundamentally unintelligent.
To me, over-engineering is including anything that you don't need and that you don't know you're going to need. If you catch yourself saying that a feature might be nice if the requirements change in a certain way, then you might be over-engineering. Basically, over-engineering is violating YAGNI.
The agile answer to this question is: every piece of code that does not contribute to the requested functionality.
There is this discussion at Joel on Software that starts with,
creating extensive class hierarchies for an imagined future problem that does not yet exist, is a kind of over-engineering, and is therefore, bad.
And, gets into a discussion with examples.
If you spend so much time thinking about the possible ramifications of a problem that you end up interfering with the solving of the problem itself, you may be over-engineering.
There's a fine balance between "best engineering practices" and "real world applicability". At some point you have to decide that even though a particular solution may not be as "pure" from an engineering standpoint as it could be, it will do the job.
For example:
If you are designing a user management system for one-time use at a high school reunion, you probably don't need to add support for incredibly long names, or funky character sets. Setting a reasonable maximum length and doing some basic sanitizing should be sufficient. On the other hand, if you're creating a system that will be deployed for hundreds of similar events, you might want to spend some more time on the problem.
It's all about the appropriate level of effort for the task at hand.
I'm afraid that a precise definition is probably not possible as it's highly dependent on the context. For example, it's much easier to over-engineer a web site that displays glittering ponies than it is a nuclear power plant control system. Redundancies, excessive error checking, highly instrumented logging facilities are all over-engineering for a glittering ponies app, but not for a nuclear power plant control system. I think the best you can do is have a feeling about when you are applying too much overhead to your features for the purpose of the application.
Note that I would distinguish between gold-plating and over-engineering. In my mind, gold-plating is creating features that weren't asked for and will never be used. Over-engineering is more about how much "safety" you build into the application either by coding checks around the code or using excessive design for a simple task.
This relates to my controversial programming opinion of "The simplest approach is always the best approach".
Quoting from here: "...Implement things when you actually need them, never when you just foresee that you need them."
To me it is anything that would add more fat to the code. Meat would be any code that does the job according to the spec, and fat would be any code that bloats the code in a way that just adds more complexity. The programmer might have been expecting a future expansion of the functionality; but still, it is fat.
My rough definition would be 'providing functionality that isn't needed to meet the requirements spec'.
I think they are the same as Gold plating and being hit by the Golden hammer :)
Basically, you can sit down and spend too much time trying to make a perfect design, without ever writing some code to check how it works. Any agile method will tell you not to do all your design up front, and to just create chunks of design, implement them, iterate over them, re-design, go again, etc.
Over-engineering means architecting and designing the application with more components than it really should have according to the requirements list.
There is a big difference between over-engineering and creating an extensible application that can be upgraded as requirements change. If I can think of an example I'll edit the post.
Over-engineering is simply creating a product with greater functionality, quality, generality, extensibility, documentation, or any other aspect than is required.
Of course, you may have requirements outside a specific project - for example, if you foresee doing future similar applications, then you might have additional requirements for extensibility, dependent on cost, that you add on to the project-specific requirements.
When your design actually makes things more complex instead of simplifying things, you’re overengineering.
More on this at:
http://www.codesimplicity.com/post/what-is-overengineering/
Disclaimer #1: I am a big-picture BA. I know no code. I read this site all the time. This is my first post.
Funny, I was just told by my boss that I over-engineered a new software product we're planning for mentoring (target market: HR people). So I came here to look up the term.
They want to get something in place to sell now, re-purposing existing tools. I can't help but sit back and think: fewer signups, lower retention, if it doesn't allow some of the flexibility we talked about and, mainly, doesn't have a highly visual UI that a monkey could use.
He said we could plan future phases to improve the product, especially the UI. We have current customers waiting on "future improvements" that we still aren't doing. They need it though, truly need it.
I am in the process of resigning so I didn't push back.
But my definition would be: making sure it does only as little as possible, for as cheap as possible, and still passes for the thing you say it is. Beyond that is over-engineering.
Disclaimer #2: This site helped me land my next job implementing a more configurable software.
I think the best answers to your question can be found in this other question.
The beauty of Agile programming is that it's hard to over-engineer if you do it right.
I'm looking for both an explanation of why and when you would use each system and what features differentiate a bug vs. issue tracking application.
Issue tracking systems usually integrate more with customers and customer issues. An issue could be "help me install this" or "How do I get the fubar into the flim flam?" They could even be something like "I need an evaluation key for your software".
Bug tracking systems help you keep track of wrong or missing things from the program.
When looking at web systems, there is usually a big difference in focus, either helping customers or tracking problems with your software.
The difference could be clearer from the following example.
Suppose you had a production issue today that affected 5 customers, but was caused by a single software defect.
In your issue-tracking system, you opened 5 tickets and started tracking what each customer reported, what was communicated to them, when the software patch was applied, etc. You can track that kind of stuff separately for each customer.
In your bug-tracking system, you made 1 entry for the software defect and started tracking things like steps to reproduce, code changes, etc.
Customer issues can be closed whenever they're remedied to the customer's satisfaction and that may or may not involve fixing the software. The bug can be closed when it's fixed and retested.
Two systems, outward- and inward-facing, tracking two different kinds of things, each with its own life cycle.
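A hypothetical data-model sketch of that split (the class and field names are invented): several outward-facing issues can point at one inward-facing bug, and each side keeps its own life cycle.

```python
from dataclasses import dataclass, field

@dataclass
class Bug:
    bug_id: int
    summary: str
    fixed: bool = False          # closed by changing the code

@dataclass
class Issue:
    issue_id: int
    customer: str
    description: str
    related_bugs: list[Bug] = field(default_factory=list)
    resolved: bool = False       # closed when the customer is satisfied

if __name__ == "__main__":
    defect = Bug(1, "checkout crashes on empty cart")
    tickets = [Issue(100 + i, f"customer-{i}", "checkout fails", [defect])
               for i in range(5)]
    # Five customer tickets, one underlying defect: fixing the bug does not
    # automatically close the tickets, and vice versa.
    print(len(tickets), "issues reference bug", defect.bug_id)
```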
Bug tracking systems like Trac are designed to have one ticket for each problem intrinsic to the program, so a ticket is closed by modifying the program.
Customer support ticket systems like IssueTrackerProduct are designed to have one ticket for each customer experiencing a situation, so a ticket is closed by working out the situation for that customer (possibly by modifying the program).
For examples of each, see Wikipedia's Comparison of issue tracking systems
A bug is a subclass of issue. All bugs are issues, but not all issues are bugs.
Typically a bug is a defect in the codebase. This is different from an incomplete or yet-to-be-implemented feature, or something harder to pin down like a developer putting in a ticket to deal with a piece of technical debt, or a concern with the UI. All of these are 'issues', semantically speaking.
A generic issue, when not falling under those other categories, is more often than not a representation of something reported by the end-user. In most systems, this reported issue is handled as a bug-report in itself. I'd venture to say this is a mistake.
The tricky part is that sometimes multiple issues may be related to other issues. It could be concerning the same bug, multiple bugs, or actually be a feature request. That is to say, there can be a many-to-many relationship between issues.
Why does the distinction matter? Well, there is a natural tree internally - Resolving one issue can indirectly complete (or contribute to completing) a million other issues. It also makes a difference in how an issue is resolved. Defects themselves may be resolved with a code change that fixes it, or makes it irrelevant. If it's a user complaint, it may be resolved by sending them a work around, and then left to be followed up on when the original defect is solved.
Features that work better at representing and working with these nuances in a useful way is really what to look for in a ticket tracking system.
At some point, you are talking about processes and methodologies more than actual ticketing systems, and the actual names of things should start to become irrelevant. Mainstream and enterprise-oriented solutions tend to run on popular frameworks like ITIL, but you can get away with ad hoc approaches provided everyone on the team has a good understanding of customer service needs. I personally see it as a waterfall (ITIL) vs. agile (DevOps) situation.
It's just semantics. A bug is a problem, an issue is something to do. They are otherwise much the same.
It's a fuzzy line at best. An issue tracking system would probably be considered the more general of the two, in that all bug tracking systems are issue tracking systems, but not necessarily the opposite.
From our friend Wikipedia:
A bug tracking system is a software application that is designed to help quality assurance and programmers keep track of reported software bugs in their work. It may be regarded as a sort of issue tracking system.
A bug is found in code
An issue can be found anywhere, in the processes, in hardware, in people.
It depends which development process you're adopting as to what the definitions mean.
I believe that a bug is something that can be fixed in code, while an issue is more of a problem with usability.
For example, a login form. A bug in the login form would be the form redirecting incorrectly after the login completes. While an issue would be that the overall login process is too slow, or there is no option to email a forgotten password.
This isn't really a complete answer to your question, but I've had similar questions come up when dealing with customers. I think at the highest level, a bug tracking system is usually more developer-focused. That is, developers are trying to track problems in the code. A function isn't returning the right value, more validation should be done, etc.
A good example of a system that integrates nicely with code is Trac.
Issue tracking systems seem to be more customer-centric. For example, being able to have a customer say "When I click on 'OK' I get an error". It may be user training, it may be a feature, or it may in fact be a bug.
So in many of the projects that I've worked on we keep these distinct. We have a high-level issue tracking system that may or may not result in an actual bug being created in the bug tracking system. However, many many bugs are tracked internally without any "issues" being created in the issue tracking system.
The problem that I see between these two is that it's really not very easy for inexperienced users to enter tickets into something like Trac because they get confused by the technical lingo. However, a high-level issue tracking system does not integrate tightly with code so it's useless to the developers.
Anyway... my $0.02.
Bugs: flaws anywhere within the process (application, database, reporting, etc.) that will prevent 100% of desired functionality from occurring. Also known and referred to as defects.
Issues: potentially caused by a bug or bugs, an issue is a report of some form of loss of functionality in the system that would be tied to a user. These are also referred to as help desk tickets in some organizations.
WIKIPEDIA LINKS
- Software Bug
- Issue Tracking
Answering this question requires context, and from the looks of it, Alan's answer fits your context.
In the world of software testing, one of the distinctions we make between an issue and a bug is: bugs are anything that threatens the value of the product, while issues are anything that threatens the value of testing (or the value of the project, and in particular the value of testing). Rapid Software Testing teaches us that.
In my experience the tracking systems allow you to make whatever distinction you want between the two. How you use a particular tracking system is up to you.
I don't think there is a definitive answer, but I usually just think of Issue Tracking as merely a more generic term that corresponds to more than just "bugs". To only use the term "Bug Tracking" is kind of a pigeon-hole, which is associated with defects in software.
An issue tracker doesn't have to be tied to software though, and even BugZilla doesn't track only bugs, but also new enhancement / feature requests, votes, etc. In that way, I think of an "issue" as just a single item of interest that someone wants to get "done."
Lately there has also been a rise in Work Item Tracking (in e.g. Visual Studio and IBM/Rational Jazz), which is lower level than "issues" - wherein an issue could be seen as requiring some N number of smaller work items to complete. At a higher level, you might also see something akin to a Milestone in BugZilla.
Bugs are specific to software developers. Issues are more general and can include all team members' progress on a project, including the graphic designers, system administrators, company executives, etc.
An issue tracker speaks in terms of things to do and can categorize an item as a bug if needed.
It is mostly just silly words, but I use an "issue tracker" as I work with many people who are not programmers, and we need to speak a common language by having a common productivity tool that makes us aware of what each other is doing.
You can use a bug tracker but it will just confuse non developers, especially if they have to think of their tasks as being a bug.
I would say it is also nice to draw a difference between a bug and an issue for programmers, as bugs are usually problems with existing code, and issues can be new feature requests.
Well... there is no difference besides the fact that an issue is more than just a bug. It can be a task, a new feature, or simply an improvement. A bug is mostly seen as incorrect system behavior, while an issue has a broader definition, beyond just "it does not work".
I got a glimpse of Hoare Logic in college. What we did was really simple. Most of what I did was proving the correctness of simple programs consisting of while loops, if statements, and sequences of instructions, but nothing more. These methods seem very useful!
Are formal methods used in industry widely?
Are these methods used to prove mission-critical software?
Well, Sir Tony Hoare joined Microsoft Research about 10 years ago, and one of the things he started was a formal verification of the Windows NT kernel. Indeed, this was one of the reasons for the long delay of Windows Vista: starting with Vista, large parts of the kernel are actually formally verified with respect to certain properties like absence of deadlocks, absence of information leaks, etc.
This is certainly not typical, but it is probably the single most important application of formal program verification, in terms of its impact (after all, almost every human being is in some way, shape or form affected by a computer running Windows).
This is a question close to my heart (I'm a researcher in Software Verification using formal logics), so you'll probably not be surprised when I say I think these techniques have a useful place, and are not yet used enough in the industry.
There are many levels of "formal methods", so I'll assume you mean those resting on a rigorous mathematical basis (as opposed to, say, following some 6-Sigma-style process). Some types of formal methods have had great success - type systems being one example. Static analysis tools based on data flow analysis are also popular, model checking is almost ubiquitous in hardware design, and computational models like Pi-Calculus and CCS seem to be inspiring some real change in practical language design for concurrency. Termination analysis is one that's had a lot of press recently - the SDV project at Microsoft and work by Byron Cook are recent examples of research/practice crossover in formal methods.
Hoare Reasoning has not, so far, made great inroads in the industry - this is for more reasons than I can list, but I suspect it is mostly around the complexity of writing and then proving specifications for real programs (they tend to get big, and fail to express properties of many real-world environments). Various sub-fields in this type of reasoning are now making big inroads into these problems - Separation Logic being one.
This is partially the nature of ongoing (hard) research. But I must confess that we, as theorists, have entirely failed to educate the industry on why our techniques are useful, to keep them relevant to industry needs, and to make them approachable to software developers. At some level, that's not our problem - we're researchers, often mathematicians, and practical usage is not foremost in our minds. Also, the techniques being developed are often too embryonic for use in large scale systems - we work on small programs, on simplified systems, get the math working, and move on. I don't much buy these excuses though - we should be more active in pushing our ideas, and getting a feedback loop between the industry and our work (one of the main reasons I went back to research).
It's probably a good idea for me to resurrect my weblog, and make some more posts on this stuff...
I cannot comment much on mission-critical software, although I know that the avionics industry uses a wide variety of techniques to validate software, including Hoare-style methods.
Formal methods have suffered because early advocates like Edsger Dijkstra insisted that they ought to be used everywhere. Neither the formalisms nor the software support were up to the job. More sensible advocates believe that these methods should be used on problems that are hard. They are not widely used in industry, but adoption is increasing. Probably the greatest inroads have been in the use of formal methods to check safety properties of software. Some of my favorite examples are the SPIN model checker and George Necula's proof-carrying code.
Moving away from practice and into research, Microsoft's Singularity operating-system project is about using formal methods to provide safety guarantees that ordinarily require hardware support. This in turn leads to faster performance and stronger guarantees. For example, in Singularity they have proved that if a third-party device driver is allowed into the system (which means basic verification conditions have been proved), then it cannot possibly bring down the whole OS; the worst it can do is hose its own device.
Formal methods are not yet widely used in industry, but they are more widely used than they were 20 years ago, and 20 years from now they will be more widely used still. So you are future-proofed :-)
Yes, they are used, but not widely in all areas. There are more methods than just Hoare logic; some are used more, some less, depending on their suitability for a given task. The common problem is that software is biiiiiiig and verifying that all of it is correct is still too hard a problem.
For example, the theorem prover ACL2 (software that aids humans in proving program correctness) has been used to prove that a certain floating-point processing unit does not have a certain type of bug. It was a big task, so this technique is not too common.
Model checking, another kind of formal verification, is used rather widely nowadays, for example Microsoft provides a type of model checker in the driver development kit and it can be used to verify the driver for a set of common bugs. Model checkers are also often used in verifying hardware circuits.
Rigorous testing can be also thought of as formal verification - there are some formal specifications of which paths of program should be tested and so on.
"Are formal methods used in industry?"
Yes.
The assert statement in many programming languages is related to formal methods for verifying a program.
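As a minimal sketch of that connection (run-time checking, not a proof): asserts can encode the precondition and postcondition of a Hoare triple {P} C {Q}, here for a made-up integer square root routine.

```python
def integer_sqrt(n: int) -> int:
    assert n >= 0, "precondition: n must be non-negative"            # P
    root = 0
    while (root + 1) * (root + 1) <= n:                              # C
        root += 1
    assert root * root <= n < (root + 1) * (root + 1), "postcondition violated"  # Q
    return root

if __name__ == "__main__":
    print(integer_sqrt(10))  # 3
```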
"Are formal methods used in industry widely ?"
No.
"Are these methods used to prove mission-critical software ?"
Sometimes. More often, they're used to prove that the software is secure. More formally, they're used to prove certain security-related assertions about the software.
There are two different approaches to formal methods in the industry.
One approach is to change the development process completely. The Z notation and the B method that were mentioned are in this first category. B was applied to the development of the driverless subway line 14 in Paris (if you get a chance, climb in the front wagon. It's not often that you get a chance to see the rails in front of you).
Another, more incremental, approach is to preserve the existing development and verification processes and to replace only one of the verification tasks at a time by a new method. This is very attractive but it means developing static analysis tools for existing, widely used languages that are often not easy to analyse (because they were not designed to be).
If you go to (for instance) http://dblp.uni-trier.de/db/indices/a-tree/d/Delmas:David.html you will find instances of practical applications of formal methods to the verification of C programs (with the static analyzers Astrée, Caveat, Fluctuat, Frama-C) and binary code (with tools from AbsInt GmbH).
By the way, since you mentioned Hoare Logic, in the above list of tools, only Caveat is based on Hoare logic (and Frama-C has a Hoare logic plug-in). The others rely on abstract interpretation, a different technique with a more automatic approach.
My area of expertise is the use of formal methods for static code analysis to show that software is free of run-time errors. This is implemented using a formal methods technique known as "abstract interpretation". The technique essentially enables you to prove certain attributes of a software program, e.g. prove that a+b will not overflow or that x/(x-y) will not result in a divide by zero. An example static analysis tool that uses this technique is Polyspace.
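A toy sketch of the interval flavour of abstract interpretation (an illustration of the general idea, not how Polyspace works internally): track each variable as a [lo, hi] range and check whether a + b can leave an assumed 32-bit signed range.

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1   # assumed 32-bit signed integer bounds

def interval_add(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """Abstract addition: the result interval covers every concrete sum."""
    return (a[0] + b[0], a[1] + b[1])

def may_overflow(interval: tuple[int, int]) -> bool:
    return interval[0] < INT_MIN or interval[1] > INT_MAX

if __name__ == "__main__":
    a = (0, 1_000_000_000)       # ranges the analyser might have inferred
    b = (0, 2_000_000_000)
    print(may_overflow(interval_add(a, b)))  # True: the sum is not provably safe
```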
With respect to your question: "Are formal methods used in industry widely?" and "Are these methods used to prove mission-critical software?"
The answer is yes. This opinion is based on my experience supporting the Polyspace tool for industries that rely on the use of embedded software to control safety-critical systems such as electronic throttle in an automobile, the braking system for a train, a jet engine controller, a drug delivery infusion pump, etc. These industries do indeed use these types of formal methods tools.
I don't believe all 100% of these industry segments are using these tools, but the use is increasing. My opinion is that the Aerospace and Automotive industries lead with the Medical Device industry quickly ramping up use.
Polyspace is a (hideously expensive, but very good) commercial product based on program verification. It's fairly pragmatic, in that it scales up from 'enhanced unit testing that will probably find some bugs' to 'the next three years of your life will be spent showing these 10 files have zero defects'.
It is based more on negative verification ('this program won't corrupt your stack') than positive verification ('this program will do precisely what these 50 pages of equations say it will').
To add to Jorg's answer, here's an interview with Tony Hoare. The tools Jorg's referring to, I think, are PREfast and PREfix. See here for more information.
Besides other, more procedural approaches, Hoare logic was the basis of Design by Contract, introduced as an object-oriented technique by Bertrand Meyer in Eiffel (see Meyer's article of 1992, page 4). While Design by Contract is not the same as formal verification methods (for one thing, DbC doesn't prove anything until the software is executed), in my opinion it provides a more practical use.
Over the years we have seen (well, I have :) a number of languages come and go. Some were more accepted, some a little less. So I was wondering, what do you think are factors which most impact whether the language survives ? And whether it will have a future for a number of years (by that I mean several decades or so) ?
For example, Fortran and C have survived the test of time. They were popular, certainly, but they also had very good corporate backing, financing, and standard specifications (ANSI and ISO).
Some of the modern languages I see today, although they are popular, have none of that (the current implementation is often considered standard). That is all fine for the time being, but what about 10 or 20 years later, when their authors are maybe not here anymore. I very rarely see open source languages which make the transition into corporate financing.
If you could put it in a few words, in your opinion, what would be the most important factors for the survival of a language, and why?
Ruby is popular, although it has no corporate backing. And it has been here for 14 years already.
Perl already survived 22 years, and probably will survive a few more.
Python has no corporate backing (OK, I don't know if you'd count Google's engagement), yet it made it into Fortune 500 companies.
On the other hand:
Pascal got corporate backing and died.
Ada has corporate backing and it's practically reduced to a DSL for avionics.
I think the answer depends a lot on the time-frame in which you define survival. This is important because I think there are three factors that have changed over time, and are still changing:
Hardware performance (i.e. speed or memory)
Hardware complexity (i.e. single-core v.s. multi-core)
Software complexity
I think the reason C has survived is because, until just the past few years, there was still a very real need for maximum performance in a lot of applications. Perhaps there will always be that kind of need, but I think it has been growing much less relevant in the past few years. I think it's always going to be around, but I'd be surprised if it was widely used 20 years from now; it's already started getting passed up in favor of C#/Java/etc in the past five years.
The recent (by which I mean past five years or so) rise of languages like Python are also a response to the fact that software has grown more complex, while performance has become less of an issue. Because consumers value the 'now', there's a huge incentive to develop quickly, and worry about speed later, if at all. That has a pretty big impact on which language you use for development.
I see clarity, maintainability and ease of use as the most important factor for survival, if you take the future out to 20+ years.
Every future language needs to make an existing problem easy
For example, concurrent programming is not easy in most languages today. This will be solved with a new language, as we cannot easily coax our existing paradigms into the parallel world. Just take a look at Java, which was built from the ground up with threads in mind: it has so many caveats when you even dare to do concurrent programming.
We'll need a system that makes it so easy to do concurrent programming that we won't even need to think about it. We'll need a memory model that protects us from having to think about these problems. For those who can't imagine such a world, you are just stuck in our current paradigm. We will need to change the way we develop software for this to work. Serious problems require change.
Another way for a language to survive is to attach it to an entire system. Just look at Objective-C: it is Apple's language for all Apple products. I think this is the way to go. Design a system that is worthy of its own language.
There are many other examples, I've been thinking about this problem for a long time.
As far as I can recall, Fortran had no corporate backing until it was well established. C was backed by AT&T, but they really didn't care if anyone else adopted it. And both were well established before they had ANSI standards (also, note that ANSI & ISO provide standard specifications, not implementations).
On the other hand, IBM heavily backed & promoted PL/I, and that never really caught on. And the US Government tried to get all of us writing Ada, and that didn't work either.
So, what does work? Good question. Getting schools to teach it is good (Pascal pretty much disappeared when colleges switched to C++ & Java). Lately, having "buzz on the 'net" is good (cite: Java, Ruby).
In order for a language to survive it needs several things:
It needs to solve a problem better than other comparable options. This is the subjective aspect: developers feel it is better, and so they adopt it.
It needs to have good tooling. Without good tooling a language will never catch on to the masses.
It needs a strong community to be built around it. A community which provides assistance, help, components, etc etc...
I don't think corporate backing has a direct impact on these items. I think it can make things such as developing tooling more likely, but there are too many examples where it has helped or not helped adoption of a language.
The open source community has become more like a huge corporation, hasn't it?
Languages survive while they are used, and while people are prepared to maintain them. People are often prepared to maintain the language while it is used. If a language is not used, it dies.
There can be all sorts of things that contribute to, or determine whether, a language dies. Corporate-sponsored languages die if the corporate sponsor ceases to see a benefit (profit) in the language, or they want people to use an alternative, and the corporate sponsor is unwilling to release the code to open source, and there is no open source alternative.
I don't see evidence that corporate backing or standardization is sufficient to determine whether a language survives or not. There are many corporate-backed languages that have failed to gain a strong foothold (Ada comes to mind). There are many standardized languages (Common Lisp) that also failed. On the other hand, there are plenty of non-standard, non-corporate languages that gain popularity (Perl, PHP, Ruby). There doesn't seem to be causality there.
The viability of a language is really determined by the community around it. There is a positive feedback loop. More users means more support and more libraries which in turn means more users. Popular languages can languish, but they don't totally die out. Not for a long time.
If I were looking for a language to use for something that had to last, the two biggest criteria in my mind would be:
Does it work well for my problem domain?
Is the community strong enough to be self-perpetuating?
If the answers to both questions are yes, use the language. If either answer is no, don't.
While other languages have been almost killed by their corporate backing - Delphi, for example.