What is the term for noddy code to test a principle - terminology

Apologies if this is a bad question for this forum!
A colleague thinks there is a short, single-word term for a throwaway program written to verify that an algorithm or technique works as expected, but can't remember what it is, and no-one else in the office has any idea.

Spike tends to be the term used in agile methodology to describe:
A spike solution is a very simple program to explore potential
solutions. Build the spike to address only the problem under
examination and ignore all other concerns. Most spikes are not good
enough to keep, so expect to throw them away.
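For a concrete, entirely made-up example: a spike written to answer one narrow question (here, whether this platform's qsort() happens to keep records with equal keys in their original order) might be nothing more than the following throwaway C program.

#include <stdio.h>
#include <stdlib.h>

/* Throwaway spike: does this libc's qsort() keep equal keys in their
 * original order? (The standard does not require it to.) */
struct rec { int key; int original_index; };

static int by_key(const void *a, const void *b) {
    const struct rec *x = a, *y = b;
    return (x->key > y->key) - (x->key < y->key);
}

int main(void) {
    struct rec r[6] = {{2,0},{1,1},{2,2},{1,3},{2,4},{1,5}};
    qsort(r, 6, sizeof r[0], by_key);
    for (int i = 0; i < 6; i++)
        printf("key=%d original_index=%d\n", r[i].key, r[i].original_index);
    return 0;   /* eyeball the output, answer the question, delete the file */
}

Once it has answered the question, the program has served its purpose and gets deleted.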
Some related terms might be
proof of concept
MVP (minimum viable product)


How large a role does subjectiveness play in programming?

I often read about the importance of readability and maintainability. Or, I read very strong opinions about which syntax features are bad or good. Or discussions about the values of certain paradigms, like OOP.
Aside from that, this same question floats about in my mind whenever I read debates on SO or Meta about subjective questions. Or read questions about best practices and sometimes find myself or others disagreeing.
What role does subjectiveness play within the programming realm?
Sometimes I think it plays a large role. Software developers are engineers in a way, but also people. A large part of programming is dealing with code that's human readable. This is very different from Math or Physics or other disciplines with very exact and structured rules. Here the exact structure and rules are largely up in the air, changeable on a whim, and hence the number of languages in existence. And one person may find one language very readable, and another person may find their own language the most comforting.
The same with practices. One person may not like certain accepted practices. I myself find splitting classes into different files very unreadable, for instance.
But, I can't say rules haven't helped in general. Certain practices have and do make life easier. And new languages have given rise to syntax and structure that make life easier. There's certainly been a progression towards code that is easier to read and maintain even given a largely diverse group of people. So maybe these things aren't as subjective as I thought.
It reminds me, in a way, of UI design. Certainly it's subjective, but then there's an entire discipline involved in crafting good UI and it tends to work.
Is there something non-subjective about the ideas behind maintainability, readability, and other best practices? Is there something tangible to grasp when one develops a new language or thinks of new practices?
Arguably your question is really about the distinction between programming, which is mathematical, algorithmic and scientific, and software engineering, which is subjective, variable and human-focused.
Great programmers are not necessarily great software engineers, and vice versa. The two skillsets, while not exclusive by any means, have less overlap than they appear at first. Their relative importance depends a lot on the project: a brilliant programmer working alone can turn out amazing examples of technical genius, and it doesn't matter that nobody else can understand or maintain it, because he's not going to share the code anyway. But move into an enterprise environment -- like corporate in-house software development -- and I'll gladly trade you ten "cave troll" geniuses for a mediocre programmer who understands the importance of readability and documentation.
It's been my experience that the world needs great software engineers more than it needs great programmers. Relatively few people in this day and age are writing software which is truly performance-critical (OS kernels, compilers, graphics engines, realtime embedded systems, etc), and the Internet allows mediocre programmers to quickly grab algorithmic solutions for problems they couldn't solve alone. But nearly everyone writing professional code has to work within a team. And team productivity rises and falls dramatically on the ability of its members to communicate effectively and distribute workload efficiently, two skills which are highly subjective and impossible to prove by rigid formula.
Most software engineering principles are built on experience rather than objective law. Much like the social sciences, we study, learn, adapt and apply -- but with no real guarantees of outcome. All we can say is that some things seem to work better than others in most groups.
I think a lot of it is necessarily determined by how much our minds are able to process at one time. So it comes down to how much the language and tools enable a team or a developer to break down the problem into chunks that are meaningful by themselves, but not so large that it becomes too hard to grasp them. The common theme is the art of organizing information (in this case, the code, the logic, ...). But that's not so different from Maths or Physics, by the way.
Just as the best authors borrow from many styles, the best programmers keep a huge range of patterns in their mental arsenal. Slavishly following a few patterns and adhering to some absolute truth is both lazy and dangerous.
Put it another way, the day we rely on robots for code review is the day I quit.
It all depends on your point of view :-)
But to answer your questions, I think one way to view subjectivity is to recognize that software languages, tools, and best practices are a shared means of communication among individuals. Yes, a programming language is a formal way of instructing a computer how to behave, but a programming language may also be viewed as a way to define and communicate specifications to a high level of detail (the code is the ultimate spec, is it not?).
So as far as we may want to concern ourselves with the degree of subjectivity in software languages, tools, and best practices, I would say that the lack of subjectivity may indicate how well communication is facilitated.
Yes, individuals have certain proclivities that are expressed in their habits and tendencies, but that should not ultimately matter too much in the perfect platform for development.
Turning to my Maths PhD wife, I asked if there is any subjectivity in mathematics. Her answer was yes, there is, mainly in the way we as humans arrive at the answer.
If a mathematical proof is the result, how you get to that result can vary. If the dataset is large you may need to use a computer, which can introduce errors, and so there is debate about whether that is the right approach. Or sometimes mathematicians disagree on the theory - one is trying to prove that x is true while the other is trying to prove that x is false.
I think the same thing exists in computer science. A correct answer is a program that runs correctly, but that definition of correct may be different for each project. Sometimes correct means no bugs. Sometimes it means running efficiently.
From here programmers can argue how best to achieve the "correct" result. A good example of this is the FizzBuzz application. A simple answer would be just a for loop, but Enterprise FizzBuzz is also "correct" in that it produces the correct answer, yet it is generally laughed at as "bad" engineering due to its over-complication of the idea (it was a joke app, after all).
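For reference, the "simple answer" really is just a loop; a minimal sketch in C that prints the expected output for 1 to 100:

#include <stdio.h>

int main(void) {
    for (int i = 1; i <= 100; i++) {
        if (i % 15 == 0)      printf("FizzBuzz\n");  /* divisible by both 3 and 5 */
        else if (i % 3 == 0)  printf("Fizz\n");
        else if (i % 5 == 0)  printf("Buzz\n");
        else                  printf("%d\n", i);
    }
    return 0;
}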
How large a role does subjectiveness play in programming? I'd say it's a very large part of what we do, simply because we are human, and because there are multiple ways of getting the "correct" answer so there is disagreement over which way is the best.
Studies have been done showing that certain practices reduce defect rates in software. For instance, one study found a strong correlation between cyclomatic complexity and the probability of a module being fault-prone. Other studies put the average effectiveness of design and code inspections at 55 and 60 percent, respectively. So it appears to be in our best interests to favor simplicity, check metrics, and do code reviews.
We're talking probabilities here, though. If I review your code, I'm not guaranteed to find 60% of your bugs. There are also few absolutes in software development; experienced developers know that the correct answer is generally "it depends." That said, there are a number of practices with objective data in their favor.

Is TDD overkill for small projects?

I have been reading quite a bit recently about TDD and the like, and I'm not quite sold on it just yet. I make a lot of small hobby projects (just me) and I'm wondering whether trying to do TDD is overkill for such a thing, though I have seen small open source projects with around 3 developers, and even a few one-person projects, that do TDD.
So is TDD always a good thing to do, or is there a threshold at which it starts to make sense?
Small projects have a habit of turning into big projects without you realizing it, and then you wish you'd started with TDD :)
TDD shines in small projects. It's often much easier to adhere to TDD in a small project, and it's a great time to practice and get the discipline required to follow TDD.
In my experience larger projects tend to be the ones that abandon TDD at some threshold. (I'm not suggesting this is a good thing).
I think larger projects tend to abandon it for a couple of reasons:
Developer inexperience --- either in general or with TDD
Time Constraints --- Larger projects are inherently more complex
Added complexity leads to deadline overruns and unit tests tend to get ditched first
This can be exacerbated by an inexperienced team
From my personal experience I can say the following:
Every single time I started one of my personal little hobby projects, I vowed to develop it using TDD.
Every single time I didn't.
Every single time I regretted it.
Everything has a cost-benefit curve.
Ignoring many of the oft-disputed benefits of TDD, TDD is worth it if your implementation will change often enough that the benefit of having an automated test suite outweighs whatever extra cost is involved in initial development.
I always find a question like this funny. What is the alternative? Writing code that you only compile and never run to verify its correctness? Do you wait until you deploy to production to find out if your class works at all?
If we never had a practice called TDD, or before JUnit was invented back in 1997, was there no code testing? Of course there was. So why is it such a big deal now that you have testing frameworks to help you with this?
Even a small project on a tight deadline won't want to wait until production to find out if it works. Write the tests.
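Even without a framework, "write the tests" can be as small as this sketch: a hypothetical price_with_tax() function (the name and behaviour are made up for illustration) checked with plain assert().

#include <assert.h>
#include <math.h>

/* Hypothetical function under test. */
static double price_with_tax(double net, double rate) {
    return net * (1.0 + rate);
}

int main(void) {
    /* A handful of plain assertions (no framework required). */
    assert(fabs(price_with_tax(100.0, 0.20) - 120.0) < 1e-9);
    assert(fabs(price_with_tax(0.0,   0.20) -   0.0) < 1e-9);
    assert(fabs(price_with_tax(50.0,  0.00) -  50.0) < 1e-9);
    return 0;   /* exits cleanly only if every check passed */
}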
It's not necessary for any project that's "small", but I define "small" as less than one class.
Overkill? Not at all. In addition to the main benefit, which is writing code you can rely on because you've thought about ways it can break, you'll be more disciplined and potentially more productive with test driven development. Pick up any of the Pragmatic Programmer books for tips and inspiration.
I would take the opportunity of using TDD with a small project just to get your feet wet. It would be a good learning experience even if you realize it's not for you.
Something I've heard from one member of our local Agile group is that she doesn't find TDD useful for the very earliest stages of the project, where you're essentially making quick sketches and you're not really sure what shape the thing is taking yet. But as soon as you have some ideas of what the interfaces look like, you can start using tests to help you define them.
TDD is another tool, like documentation, to improve the clarity of the code. This is critical when other people need to work with your code, but many of us find it's also very helpful when looking back at our own code. Ever had a hobby project you picked back up after being away from it for a while, came across a weird bit of code, and wondered "why the heck did I write that?"
I and others use TDD on any project that is more than say a few lines of code.
Once you get the testing bug, it's hard not to use TDD for anything. Personally I've found my code has improved several times over due to TDD.
I believe it's worth it for most any project. TDD and the consequent regression testing enables you not only to determine if components work as you write them, but as you introduce changes and refactorings. Provided your tests are sufficiently complete, you can cover scenarios for infrequent/unlikely edge cases and produce/maintain more reliable code.
Going forwards through the project lifecycle, the continuous testing cycles will save you the manual repetition of tests, and negate the obvious chance of repeating these incorrectly/incompletely.
Well, it is not really the number of people that is the deciding factor for TDD (at least that is what your question seems to imply), but rather the size of the project.
The advantage of TDD is that pretty much all the code you develop will be unit-tested. That way you save a lot of hassle when refactoring later. Of course, this really only becomes necessary once your project reaches a decent size.
In my experience 90% of the time those who are dubious about the benefits have not tried it.
Try it, with a skeptical mind. Measure what you are hoping to gain from it, before and after.
I can point to way less time spent/wasted fixing bugs found in production. I see/measure better productivity (faster time to market), improvements in code quality (across a variety of metrics), closer match to requirements (ie less rework because the requirements were not clear), etc.
I "feel" better about projects using TDD, but then I am "test-infected". Developer morale on projects using TDD is generally higher, as a subjective opinion.
If you don't get those results, don't use it. If you don't care enough about those results to measure them, then use TDD or not as makes you feel better.
TDD has a learning curve. If you are not willing to put the effort in to give it a serious attempt, don't bother.
A small project is great way to give it a serious try without risking much.
When you accept that consequential errors you don't expect might happen as a result of the code you write, TDD makes sense.
I'd say it totally depends on the given time frame. If you can afford to spend almost twice the time you'd usually require, then go for it.
But in my opinion speed is these days one of the most important factors (for competitive companies).
A project developed with good OO code is inherently well suited for testing, and arguably can acquire a test-driven focus later in its development. I'd actually say that when you're waterfalling on emerging technologies with a limited budget, TDD is completely optional.
I think TDD is worth it no matter the size (EVEN if it's one class - since writing the tests first can help you come up with a more sane design).
The place that I feel it may not be necessary is when you are building a project in which you aren't sure what you want, and you are unlikely to care about maintainability. I find that there aren't ANY projects that fit that category at work, but I have found that occasionally I am developing personal projects in which this is the case. In these cases, I am usually learning a new framework and have no idea what I'm doing from the beginning, so my tests would be more likely to break over time for the wrong reasons, thus decreasing their value.
However, I also acknowledge that not using TDD is costing me maintainability - once I know what I'm doing, I promptly fall back to red/green/refactor.
Summary
You have a lot of answers above. Your question is years old, but allow me to chime in: yes, do TDD! Test your code! Be smart about it.
Design-by-Contract
TDD and BDD are best understood in the context of Hoare-logic preconditions and post-conditions (as well as other forms of code-correctness Boolean assertions). The best application of it I have ever used is Eiffel in EiffelStudio.
The Code-Fail-Correct model is okay until one starts to measure how much time developers and QA people spend on correcting bugs.
You can also go hugely wrong with TDD and BDD as well. TDD can end up generating massive code-bloat, where your test code is larger and harder to maintain than your production code. BDD—which is really mostly DbC—can be misunderstood, misapplied, and mismanaged with its own complexities, bloat, and cost-overhead as well.
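In a language without native contracts, the flavour of Hoare-style pre- and post-conditions can at least be sketched with runtime assertions. This is only a rough approximation of real Design-by-Contract (in Eiffel the contract is part of the interface); the function below is made up for illustration.

#include <assert.h>
#include <stddef.h>

/* Contract: requires a non-empty array; ensures the result is no
 * larger than any element. */
static int minimum(const int *a, int n) {
    assert(a != NULL && n > 0);          /* precondition  */
    int best = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] < best)
            best = a[i];
    for (int i = 0; i < n; i++)          /* postcondition */
        assert(best <= a[i]);
    return best;
}

int main(void) {
    int v[] = {3, 1, 4, 1, 5};
    assert(minimum(v, 5) == 1);
    return 0;
}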
The Need
The deepest need is for a language specification, compiler, IDE, and testing system where TDD + BDD (aka DbC) is baked in, with all the proper parts in their proper place instead of bolt-on Frankenstein nonsense trying to masquerade as TDD + BDD.
I find it humorous to watch programmers twisting in the wind of trying to shoehorn common implementations of TDD and BDD into mainstream languages that have no sense of Design-by-Contract at all. Everyone interprets TDD + BDD through this language-spec/compiler/IDE lens as though they truly "get" what it is. They never actually see just how silly and distorted it is.
In From the Cold
TDD + BDD (DbC) get distorted just like other technologies and topics. For example: do not attempt to use Java as a lens for understanding object-oriented theory. The same is true for C++ or other C-derived languages. Trying to use a language as a means to learn OO is like thinking that knowing your calculator will cause you to understand calculus.
The only language specification, compiler, IDE, and testing system I am aware of that is built from a theoretical understanding of TDD and BDD is Eiffel and EiffelStudio. I have been using it for some 20 years. I've been around this block many times. It frustrates me to see you all suffering and twisting about on subjects that (to me) are as clear as a cloudless summer day in springtime.

What are some advanced software development topics every developer should know? [closed]

Let's say your company has given you the time & money to acquire training on as many advanced programming topics as you can eat in a year, carte blanche. What would those topics be, and how would you prefer to acquire them?
Assumptions:
You still have deliverables to bring into existence, but you're allowed one week per month for the year for this training.
The training can come from anywhere, e.g. classroom, on-site instructor, books, subscriptions, podcasts, etc.
Subject matter can cover any platform, technology, language, DBMS, toolset, etc.
Concurrent/parallel programming and multi-threading, especially with respect to memory models and memory coherency. I think every programmer should be aware of the considerations in this arena as we move into a world of multi-core/multi-CPU hardware.
For this I would probably rely on Internet research most heavily, but an on-campus primer at a good university could be a good way to start.
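As a taste of why memory models and synchronization matter, here is a minimal sketch of the classic shared-counter pitfall using POSIX threads; remove the mutex and the two threads race, usually producing a total well below 2,000,000.

#include <pthread.h>
#include <stdio.h>

/* Two threads increment a shared counter; the mutex makes the
 * increments atomic with respect to each other. Compile with -pthread. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *work(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, work, NULL);
    pthread_create(&b, NULL, work, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}

The interesting part is what happens without the lock: on most hardware the result then varies from run to run, which is exactly the kind of memory-model surprise worth understanding.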
Security!
Far too many programmers just build something and think they can add security as an afterthought after finishing the "main" part of the program. You could always benefit from knowing more about how to secure your app, how to design software to be secure from the get-go, how to do intrusion detection, etc.
Advanced Database Development
Things like data warehousing (MDX, OLAP queries, star schemas, fact tables, etc), advanced performance tuning, advanced schema and query patterns, and the like are always useful.
Here are the three that I'm always finding myself explaining to junior developers who didn't get enough CS training. All that other stuff is generally more hype than substance, or can be fairly easily picked up. But if you don't know these three, you can do a great deal of damage:
Algorithm analysis, including Big O notation.
The various levels of cohesion and coupling.
Amdahl's Law, and how it pertains to optimizations (illustrated just below).
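As a quick back-of-the-envelope illustration of Amdahl's Law: if only a fraction p of a program can be parallelized, the best possible speedup on n processors is 1 / ((1 - p) + p/n), which is why shrinking the serial part often matters more than adding cores.

#include <stdio.h>

/* Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the
 * parallelizable fraction and n the number of processors. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    printf("p=0.50, n=16   -> %.2fx\n", amdahl(0.50, 16));    /* ~1.88x */
    printf("p=0.95, n=16   -> %.2fx\n", amdahl(0.95, 16));    /* ~9.14x */
    printf("p=0.95, n=1024 -> %.2fx\n", amdahl(0.95, 1024));  /* still under 20x */
    return 0;
}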
Internationalization issues, especially since it sounds like it would not be an advanced topic. But it is.
Accessibility
It's ignored by so many organizations but the simple fact of the matter is that there are a huge number of people with low or no vision, color blindness, or other differences that can make navigating the web a very frustrating experience. If everybody had at least a little bit of training in it, we might get some web based UIs that are a little more inclusive.
Object oriented design patterns.
I guess "advanced" is different for everyone, but I'd suggest the following as being things that most decent developers (i.e. ones that don't need to be told about NP-completeness or design patterns) could gain from:
Multithreading techniques that go beyond "lock" and when to apply them.
In-depth training to learn and habitualize themselves with clever features in their toolchain (IDE/text editor, debugger, profiler, shell).
Some cryptography theory and hands-on experience with different common flaws in security schemes that people create.
If they program against a database, learn the internals of their database and advanced query composition and tuning techniques.
Developers should know the basics of SQL development and how their decisions impact database performance. It is one thing to write a query; it is another to write a query, understand the explain plan, and make design decisions based on that output. I think a good course on PL/SQL development and database performance would be very beneficial.
Unfortunately communication skills seem to fall under the "advanced topics" section for most developers (present company excluded, of course).
Best way to acquire this skill: practice.
Take off the headphones, and talk to someone instead of IM'ing or emailing the guy at the next desk.
Pick up the phone and talk to a client instead of lobbing an email over the fence.
Ask questions at a conference instead of sitting behind your laptop screen twittering.
Actively participate in a non-technical meeting at work.
Present something in public.
Most projects do not fail because of technical reasons. They fail because they could not create a team. Communication is vital to team dynamics.
It will not harm your career either.
One of the best courses I took was a technical writing course. It has served me well in my career.
Additionally: it probably does not matter WHAT the topic is - the fact that the organization is interested in it and is paying for it and the developers want to go and do go, is a better indicator of success/improvement than any one particular topic.
I also don't think it matters that much what the topic is. Dev organizations deal with so many things during a project that training and then on the job implementation/trial and error will always get you some better perspective - even if the attempts to try out/use the new stuff fail. That experience will probably help more on the subsequent projects.
I'm a book person, so I wouldn't really bother with instruction.
Not necessarily in this order, and depending on what you know already
OO Programming
Functional Programming
Data structures and algorithms
Parallel processing
Set-based logic (essentially the theory behind SQL and how to apply it)
Building parsers (I only put this because it actually came up where I work)
Software development methodologies
NP Completeness. Specifically, how to detect if a problem is NP-Complete, and how to build an approximate solution to the problem.
I see this as important because you don't want a developer to try and solve an NP-complete problem by getting the optimum solution, unless the problem's search space is very small, in which case brute force is acceptable. However, as the search space increases, the time required to solve the problem increases exponentially.
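As one concrete example of settling for an approximate solution: Vertex Cover is NP-complete, but the classic greedy 2-approximation (pick any uncovered edge, take both endpoints) is a few lines and guarantees a cover at most twice the optimal size. The little graph here is made up purely for illustration.

#include <stdio.h>

struct edge { int u, v; };

int main(void) {
    /* A small example graph; the algorithm works for any edge list. */
    struct edge edges[] = {{0,1},{0,2},{1,3},{2,3},{3,4}};
    int n_edges = 5, n_nodes = 5;
    int in_cover[5] = {0};

    /* Greedy 2-approximation: for every edge not yet covered, add
     * both of its endpoints to the cover. */
    for (int i = 0; i < n_edges; i++) {
        if (!in_cover[edges[i].u] && !in_cover[edges[i].v]) {
            in_cover[edges[i].u] = 1;
            in_cover[edges[i].v] = 1;
        }
    }

    printf("vertex cover:");
    for (int i = 0; i < n_nodes; i++)
        if (in_cover[i])
            printf(" %d", i);
    printf("\n");   /* prints: vertex cover: 0 1 2 3 (optimal here is {0, 3}) */
    return 0;
}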
I'd cover new technologies and trends. Some of the new technologies I'm researching/enhancing my skills with include:
Microsoft .NET Framework v3.0/v3.5/v4.0
Cloud Computing Frameworks (Amazon EC2, Windows Azure Services, GoGrid, etc.)
Design Patterns
I am from the MS-based developer world, so here is my take on this:
More about new concepts in cloud computing (the various APIs, etc.), as the industry has been betting on it for some time.
More about LINQ for the .NET Framework.
Distributed databases
Refactoring techniques (which implies also learning to write a good set of unit/functional tests).
Knowing how to refactor is the best way to keep code clean -- it is rare when you get it right the first time (especially in new designs).
A number of refactorings, however, require a decent set of tests to check that the refactoring did not add unexpected behavior.
Parallel computing - the easiest and best way to learn it
Debugging
Debugging by David J. Agans is a good book on the topic. Debugging can be very complex when you deal with multi-threaded programs, crashes, algorithms that don't work, etc. Everybody would be better off being good at debugging.
I'd vote for real-world battle stories. Have developers from other organizations present their successes and failures. Don't limit the presentations to technologies you're using. With a significantly complex project, this is bound to cut into 'advanced' topics you haven't even considered. Real-world successes (and failures) have a lot to teach.
Go to the Stack Overflow DevDays
and the ACCU conferences
Read
Agile Software Development, Principles, Patterns, and Practices (Robert C. Martin)
Clean Code (Robert C. Martin)
The Pragmatic Programmer (Andrew Hunt & David Thomas)
Well if you're here I would hope by now you have the basics down:
OOP Best practices
Design patterns
Application Security
Database Security/Queries/Schemas
Most notably developers should strive to learn multiple programming languages and disciplines, in order for their skill set to be expanded in more than one direction. They don't need to become experts in these other skills but at least have a very acute understanding of integration with their central discipline. This will make them much better developers in the long run, and also let them gain the ability to use all tools at their disposal to create applications that can transcend the limitations of a singular language.
Outside of programming specific topics, you should also learn how to work under Agile, XP, or other team based methodologies in order to be more successful while working in a team environment.
I think an advanced programmer should know how to get your employer to give you the time & money to acquire training on as many advanced programming topics that you can eat in a year. I'm not advanced yet. :)
I'd suggest an Artificial Intelligence class at a college/university. Most of the stuff is fun, easy to grasp (the basics at least), and the solutions to problems are usually creative.
Hitchhikers Guide to the Galaxy.
How would I prefer to acquire the training? I'd love to have a substantial amount of company time dedicated to self-training.
I totally agree on accessibility. I was asked to look into it for the website at work, and there is a real lack of good knowledge on the subject, not least a lack of CSS standards to aid the likes of screen readers.
However, my answer goes to GUI design - it's quite a difficult thing to get right. There are too many awful applications out there that could be prevented just by taking the time to follow HCI (Human-Computer Interaction) advice and designs. Take Google or Apple for inspiration when making a GUI - not your typical hundreds-of-buttons-and-labels combo that too often gets pushed out.
Automated testing: Unit testing, functional integration testing, non-functional testing
Compiler details (more relevant on some platforms than others): How does the compiler implement certain common constructs in language X? On a byte-code interpreted platform, how does JIT compilation work? What can be JIT-compiled (for example, can virtual calls be JIT compiled?)?
Basic web security
Common design idioms from other problem domains than the one you're working in at the moment.
I'd recommend learning about Refactoring, Test Driven Development, and various unit testing frameworks (NUnit, Visual Test, CppUnit, etc.) I'd also learn how to incorporate automated unit testing into your continuous integration builds.
Ultimately if you can prove your code does what it claims it can do, you don't have to be there to answer questions as to why or how. If a maintainer comes along and tries to "fix" your code, they'll know instantly if they broke it. Tests written around the requirements (use cases) explain to the maintainer what your users wanted it to do, and provide a little working example of how to call it. Think of unit tests as functional documentation.
Test Driven Development (TDD) is a more novel design approach that begins with the requirements, where you start by writing a test before you write the code. You then write exactly enough code required to pass the test. You have to stop before you write extra code (that you may never need), because you will refactor it later if you find that you really needed it.
What makes TDD cool is that a bad interface (such as one with lots of dependencies) is also very hard to write tests for. It's so hard that a coder would rather refactor the interface to make it easier to test. And that refactoring simplifies the code, removing inappropriate dependencies, or grouping related tests together to make it easier to test, thus improving cohesion. By making it immediately apparent to the developer when he's writing a badly interfaced module, the developer sticks to the architecture and gravitates to the principles of tight cohesion and loose coupling. Good interfaces are the natural result. And as a bonus, once you pass all your tests, you know you're done.
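The rhythm, in miniature (a sketch with a made-up is_leap_year() example; in practice the assertions below are written first and fail to even link until the function exists):

#include <assert.h>

/* Step 2 ("green"): just enough implementation to satisfy the tests
 * that were written first. Step 3 ("refactor") then cleans up with
 * the tests as a safety net. */
static int is_leap_year(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void) {
    /* Step 1 ("red"): these were written before is_leap_year() existed. */
    assert( is_leap_year(2024));
    assert(!is_leap_year(2023));
    assert(!is_leap_year(1900));   /* century rule */
    assert( is_leap_year(2000));   /* 400-year rule */
    return 0;
}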
On the surface this seems like an easy question to answer, just enter your favorite pet peeve about what other developers can't do correctly. But when I read through the answers and gave it some thought, I realized that every "advanced topic" brought up was covered in my undergraduate computer science curriculum--20 years ago. And I doubt that OO, security, functional programming, etc. concepts have changed in that time. Sure the tools have, but I argue that tools are different than topics.
So what is an "advanced topic" in computer science? Who is the Turing, Knuth, Yourdon of the 21st century?
I don't have a clear answer to this question, though I'd like to see more work on theories for parallel programming that will enable tools to abstract that messy stuff for developers.
Quite funny that no one has mentioned:
Debugging.
The tools and IDE you work with.
The platform you are developing for.
Everyday development is much more fun if you know your tools really well, and you accomplish more and make your life easier if you know how to debug someone else's code with ease.
Source Control

Consequences of doing "good enough" software

Does doing "good enough" software take anything away from you as a programmer?
Here are my thoughts on this:
Well, Joel Spolsky of JoelOnSoftware says that programmers get bored because they do "good enough" (software that satisfies the requirements even though it is not that optimized). I agree, because people like to do things that are right all the way. On one side of the spectrum, I want to go as far as:
Optimizing software in such a way as I can apply all my knowledge in Math and Computer Science I acquired in college as much as possible.
Automating as much of the software development process as possible: get specs from a repository, generate the code, build, test, and deploy, complete with manuals, in a single automated build step.
On the other hand, a human trait is that we like variety. For us to stay in love with programming, we need to jump from one project or technology to another so that we don't get bored and can have "fun".
I would like your opinion: are there any good or bad side effects of doing "good enough" software on you as a programmer or human being?
I actually consider good-enough programmers to be better than the blue-sky-make-sure-everything-is-perfect variety.
That's because, although I'm a coder, I'm also a businessman and realize that programs are not for the satisfaction of programmers, they're to meet a specific business need.
I actually had an argument in another question regarding the best way to detect a won tic-tac-toe/noughts-and-crosses game (an interview question).
The best solution that I'd received was from a candidate who simply checked all 8 possibilities with if statements. There were some who gave a generalized solution which, while workable, was totally unnecessary since the spec was quite clear that it was for a 3x3 board only.
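For reference, the "check all 8 possibilities" answer really is this small (a sketch; cells hold 'X', 'O', or ' '):

#include <stdio.h>

/* Check the 3 rows, 3 columns, and 2 diagonals explicitly.
 * Returns the winning mark, or ' ' if nobody has three in a row. */
static char tic_tac_toe_winner(const char b[3][3]) {
    if (b[0][0] != ' ' && b[0][0] == b[0][1] && b[0][1] == b[0][2]) return b[0][0]; /* rows */
    if (b[1][0] != ' ' && b[1][0] == b[1][1] && b[1][1] == b[1][2]) return b[1][0];
    if (b[2][0] != ' ' && b[2][0] == b[2][1] && b[2][1] == b[2][2]) return b[2][0];
    if (b[0][0] != ' ' && b[0][0] == b[1][0] && b[1][0] == b[2][0]) return b[0][0]; /* columns */
    if (b[0][1] != ' ' && b[0][1] == b[1][1] && b[1][1] == b[2][1]) return b[0][1];
    if (b[0][2] != ' ' && b[0][2] == b[1][2] && b[1][2] == b[2][2]) return b[0][2];
    if (b[0][0] != ' ' && b[0][0] == b[1][1] && b[1][1] == b[2][2]) return b[0][0]; /* diagonals */
    if (b[0][2] != ' ' && b[0][2] == b[1][1] && b[1][1] == b[2][0]) return b[0][2];
    return ' ';
}

int main(void) {
    const char board[3][3] = {{'X','O','O'},
                              {' ','X',' '},
                              {'O',' ','X'}};
    printf("winner: %c\n", tic_tac_toe_winner(board));   /* prints: winner: X */
    return 0;
}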
Many people thought I was being too restrictive and the "winning" solution was rubbish but my opinion is that it's not the job of a programmer to write perfect beautifully-extendable software. It's their job to meet a business need.
If that business need allows them the freedom to do more than necessary, that's fine, but most software and fixes are delivered under time and cost constraints. Programmers (or any profession) don't work in a vacuum.
As a programmer I want to write excellent software that's defect-free. I'm not particularly interested in gold-plating, the act of adding unnecessary features that "improve" the software, though we all do it to a certain extent. In that sense, I'm satisfied with "good enough" software, if by good enough you mean that I've done what the customer asked and, at the same time, crafted it well and ensured that it is high quality.
What bothers me is when I take shortcuts and write crappy, untested code. I hate writing code that is buggy, or where I've failed to refactor it into a better design as I've gone along. When I let a lot of technical debt creep in -- getting too busy writing new features instead of consistently improving old ones as I add new ones -- then I know that eventually I'll have something that, while the customer may be happy with it, I won't be.
Fortunately, in my workplace, management knows the value of keeping the code clean and I know the value of not obsessing over the elusive goal of perfection. No code is ever perfect, but "good enough" has to mean that the code is well-crafted. I've learned, and am still learning, to be happy with code that meets the customer's requirements and that the best feature is the one that doesn't need to be implemented. Fortunately, I have enough work to do that dropping features because they're not needed is a good thing.
In my experience, "good enough" always includes hacks, sloppiness, bad commenting, and spaghetti hell; it thus leads to a lack of scalability, bugs, and lagginess, and prevents others from being able to build effectively on your work.
Pax, while I recognize your points about business needs and pragmatism, doing things "by the book" is for the business side. "Good enough for now" and "just get something working right quick" always leads to far more work-hours later on fixing everything, or downright redoing it when it comes to that, than would be spent doing it right the first time. "The book" was written for a reason.
IMO there is a big difference between "good enough" and crappy code. For me, "good enough" is all about satisfying the requirements (both functional and non-functional). I think it is dangerous for people to assume that "good enough" means taking shortcuts or not optimizing code. If the non-functional requirements call for optimized code, then that is part of my definition of "good enough".
The key to your question is how one defines "good". To a business person, "good" software is software that solves the business need. In that case it is more about ensuring that the specifications were well understood and properly implemented. The business person may very well not care if the program is not as fast or memory-efficient as it could be.
Think about the commercial software you use: is it perfect? I really don't know anyone, including my friends at Microsoft, who would argue that the code in Windows is "perfect" or anything close to it. But it is undeniable that Windows is (and always has been) "good enough" to get millions of people to use it on a daily basis.
This issue goes back long before programming. I'm sure you have heard "If it ain't broke, don't fix it", or the original in French, "Le mieux est l'ennemi du bien" ("the best is the enemy of the good"). It may have been Voltaire who wrote about the good being the enemy of the great.
And consider what would happen if hiring managers decided to stop hiring "good" programmers and insisted that every applicant have a perfect 4.0 average in college; I for one would never have gotten a job as a programmer ;-)
So for me it is a case of do the best you can given the time and budget constraints. With more time and or more money I could always do better.
"Good enough" is in the eye of the beholder. Far too often, "good enough" is the refuge of incompetent people who write something which creates the impression of satisfying the requirements of a job. My "good enough" is unlikely to be the same as their "good enough".
Ultimately, everything we do must involve trade-offs. Some people will make the wrong trade-offs and deliver crappy software and some people will make the wrong trade-offs and fail to deliver. Rare are the ones who can make the right trade-offs and deliver software that really is good enough.
There are at least two aspects of quality that we have to take into account:
software quality: does the software meet the desired goals/requirements? do we deliver builds which have critical bugs? is it easy for end users to operate?
code quality: how hard is it to maintain the code? is it easy to implement new features?
If you're building productized software, I think it is fair to assume that it's never good enough in either aspect. Every little feature counts, and if users do not find what they need or the product is not stable enough, they will take a look at the competition. You also want to implement new features as quickly as possible, so that you have a competitive advantage in the market.
The situation gets interesting if you're building custom business software, where the end users and the decision makers are usually not the same people; then the features/quality/money trade-off becomes part of the negotiation process. What we usually do is put a "good enough" constraint on these three aspects: we have a set of requirements to meet, a level of quality to maintain, and usually not enough time for both.
What is usually forgotten in this process is the second point: code quality, or maintainability. We programmers understand that sooner or later crappy code will take its revenge and result in critical bugs or maintenance costs. Decision makers don't. The problem is, the responsibility and the risks are taken by you (your company, your division, etc.), and you will be the first to blame if something goes wrong.
My opinion is: for software quality, do what the client tells you to do; they know best which features are critical for them, how buggy the software can be, etc. For code quality and maintainability: do the best you can, learn to do more, and teach others to do the same. This is where I get the fun from.
It depends what you mean by "good enough". I can see some risk at the design level: if you make it only just good enough, you may find maintaining and extending your application painful.
I think of programming as an art - an art that requires efficiency. Is efficient code incompatible with beautiful code? I doubt that. In fact, I think that solving a problem creatively can mean multiplied performance. I don't think that programming should only be about learning new libraries for each new need, nor about bug tracking and fixing. I think it should be about beauty. Of course code cannot always be art, and sometimes one should be pragmatic about the problems encountered.

Is solving the halting problem easier than people think? [duplicate]

Although the general case is undecidable, many people still solve problems that are equivalent, well enough for day-to-day use.
In Cohen's PhD thesis on computer viruses, he showed that virus scanning is equivalent to the halting problem, yet we have an entire industry built around this challenge.
I have also seen Microsoft's Terminator project - http://research.microsoft.com/Terminator/
Which leads me to ask - is the halting problem overrated? Do we need to worry about the general case?
Will types become Turing-complete over time? Dependent types do seem like a good development.
Or, to look at it the other way, will we begin to use non-Turing-complete languages to gain the benefits of static analysis?
Is solving the halting problem easier than people think?
I think it is exactly as difficult as people think.
Will types become Turing-complete over time?
My dear, they already are!
Dependent types do seem like a good development?
Very much so.
I think there could be a growth in non-Turing complete-but-provable languages. For quite some time, SQL was in this category (it isn't any more), but this didn't really diminish its utility. There is certainly a place for such systems, I think.
First: The Halting Problem is not a "problem" in a practical sense, as in "a problem that needs to be solved." It is rather a statement about the nature of mathematics, analogous to Gödel's Incompleteness Theorem.
Second: The fact that building a perfect virus scanner is intractable (due to its being equivalent to the Halting Problem) is precisely the reason that there is "an entire industry built around this challenge." If an algorithm for perfect virus scanning could be designed, it would simply be a matter of someone doing it once, and then there's no need for an industry any more. Story over.
Third: Working in a Turing Complete language does not eliminate "the benefits of static analysis"-- it merely means that there are limits to the static analysis. That's ok-- there are limits to almost everything we do, anyway.
Finally: If the Halting Problem could be "solved" in any way, it would definitely be "easier than people think", as Turing demonstrated that it is unsolvable. The general case is the only relevant case, from a mathematical standpoint. Specific cases are matters of engineering.
There are plenty of programs for which the halting problem can be solved and plenty of those programs are useful.
If you had a compiler that would tell you "Halts", "Doesn't halt", or "Don't know" then it could tell you which part of the program caused the "Halt" or "Don't know" condition. If you really wanted a program that definitely halted or didn't halt then you'd fix those "don't know" units in much the same way we get rid of compiler warnings. I think we would all be surprised at how often trying to solve this generally-impossible problem proved useful.
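To make the three verdicts concrete, here is a toy sketch (nothing like a real termination analyser): even for a single restricted loop shape, the honest answer is sometimes "don't know", and that is exactly the verdict a real tool falls back to whenever its reasoning runs out.

#include <stdio.h>

enum verdict { HALTS, DOES_NOT_HALT, DONT_KNOW };

/* Toy classification of "for (i = start; i < bound; i += step) { }". */
static enum verdict counted_loop(long start, long bound, long step) {
    if (start >= bound) return HALTS;          /* zero iterations */
    if (step > 0)       return HALTS;          /* assumes no overflow */
    if (step == 0)      return DOES_NOT_HALT;  /* i never changes, stays below bound */
    return DONT_KNOW;                          /* negative step: wrap-around makes it murky */
}

int main(void) {
    printf("%d %d %d\n",
           counted_loop(0, 10, 1),    /* HALTS         */
           counted_loop(0, 10, 0),    /* DOES_NOT_HALT */
           counted_loop(0, 10, -1));  /* DONT_KNOW     */
    return 0;
}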
As a day-to-day programmer, I'd say it's worthwhile to continue as far down the path to solving halting-style problems, even if you only approach that limit and never reach it. As you pointed out, virus scanning proves valuable. Google search doesn't pretend to be the absolute answer to "find me the best X for Y," but it's also notably useful. If I unleash a novel virus (muwahaha), does that create a bigger solution set, or just cast light on an existing problem area? Regardless of the technical difference, some will pragmatically develop and charge for follow-up "detection and removal" services.
I look forward to real scientific answers for your other questions...
The Halting Problem is really only interesting if you look at it in the general case, since if the Halting problem were decidable, all other undecidable problems would also be decidable via reduction.
So, my opinion on this question is, no, it is not easy in the cases that matter. That said, in the real world, it may not be such a big deal.
See also: http://en.wikipedia.org/wiki/Halting_problem#Importance_and_consequences
Incidentally, I think that the Turing-completeness of templates shows that halting is overrated. Most languages guarantee that their compilers will halt; not so C++. Does this diminish C++ as a language? I don't think so; it has many flaws, but compilations that don't always halt aren't one of them.
I don't know how hard people think it is, so I can't say if it is easier. However, you are right in your observation that undecidability of a problem (in general) does not mean that all instances of that problem are undecidable. For instance, I can easily tell you that a program like while false do something terminates (assuming the obvious semantics of the while and false).
Projects like the Terminator project you mentioned obviously exist (and probably even work in some cases), so it is clear that not all is hopeless. There is also a contest (I believe every year) for tools that try to prove termination for rewrite systems, which are basically a model of computation. But it is the case that termination in many cases is very hard to prove.
The easiest way to look at it is perhaps to see the undecidability as a maximum on the complexity of instantiations of a problem. Each instantiation is somewhere on the scale of trivial to this maximum and with a higher maximum you typically have that the instantiations are harder on average as well.
The fact that a problem is undecidable does not mean that it is not interesting: on the contrary! So yes, the fact that we do not have an effective and uniform procedure to address termination for all programs (as well as many other problems about software) does not mean that it is not worth looking for partial solutions. In a sense, this is why we need software engineering: because we cannot just delegate the task to computers.
The title of your question is, however, a bit misleading. I agree with DrPizza: the termination problem is exactly as difficult as people think.
Moreover, the fact that we do not necessarily have to worry about the general case does not imply that the termination problem is overrated: it is worth looking for partial solutions precisely because we know that the general solution is hard.
Finally, the issues about dependent types and sub-recursive languages, although partially related, are really different questions, and I am not sure I see the point of mixing them all together.
001 int D(int (*x)())
002 {
003   int Halt_Status = H(x, x);
004   if (Halt_Status)
005     HERE: goto HERE;
006   return Halt_Status;
007 }
008
009 int main()
010 {
011   Output("Input_Halts = ", H(D,D));
012 }
H correctly predicts that D(D) will never stop running unless H aborts its simulation of its input.
(a) If simulating halt decider H correctly simulates its input D until H correctly determines that its simulated D could not possibly reach its own "return" statement in a finite number of simulated steps then:
(b) H can abort its simulation of D and correctly report that D specifies a non-halting sequence of configurations.
When it is understood that (b) is a necessary consequence of (a), and we can see that (a) has been met, then we understand that H(D,D) could correctly determine the halt status of its otherwise "impossible" input.
Simulating halt deciders applied to the halting theorem
The above is fully operational code in the x86utm operating system.
Because H correctly detects that D, correctly simulated by H, would continue to call H(D,D) and never reach its own "return" statement, H aborts its simulation of D and returns 0 to main() on line 011.
I finally have agreement on this key point:
H(D,D) does correctly compute the mapping from its input to its reject
state on the basis that H correctly predicts that D correctly
simulated by H would never halt.
I am the original author of this work and anything that you find on the internet about this was written by me.