Related
I often read about the importance of readability and maintainability. Or, I read very strong opinions about which syntax features are bad or good. Or discussions about the values of certain paradigms, like OOP.
Aside from that, this same question floats about in my mind whenever I read debates on SO or Meta about subjective questions. Or read questions about best practices and sometimes find myself or others disagreeing.
What role does subjectiveness play within the programming realm?
Sometimes I think it plays a large role. Software developers are engineers in a way, but also people. A large part of programming is dealing with code that's human readable. This is very different from Math or Physics or other disciplines with very exact and structured rules. Here the exact structure and rules are largely up in the air, changeable on a whim, hence the number of languages in existence. One person may find one language very readable, while another finds their own language the most comfortable.
The same with practices. One person may not like certain accepted practices. I myself find splitting classes into different files very unreadable, for instance.
But, I can't say rules haven't helped in general. Certain practices have and do make life easier. And new languages have given rise to syntax and structure that make life easier. There's certainly been a progression towards code that is easier to read and maintain even given a largely diverse group of people. So maybe these things aren't as subjective as I thought.
It reminds me, in a way, of UI design. Certainly it's subjective, but then there's an entire discipline involved in crafting good UI and it tends to work.
Is there something non-subjective about the ideas behind maintainability, readability, and other best practices? Is there something tangible to grasp when one develops a new language or thinks of new practices?
Arguably your question is really about the distinction between programming, which is mathematical, algorithmic and scientific, and software engineering, which is subjective, variable and human-focused.
Great programmers are not necessarily great software engineers, and vice versa. The two skillsets, while not exclusive by any means, have less overlap than they appear at first. Their relative importance depends a lot on the project: a brilliant programmer working alone can turn out amazing examples of technical genius, and it doesn't matter that nobody else can understand or maintain it, because he's not going to share the code anyway. But move into an enterprise environment -- like corporate in-house software development -- and I'll gladly trade you ten "cave troll" geniuses for a mediocre programmer who understands the importance of readability and documentation.
It's been my experience that the world needs great software engineers more than it needs great programmers. Relatively few people in this day and age are writing software which is truly performance-critical (OS kernels, compilers, graphics engines, realtime embedded systems, etc), and the Internet allows mediocre programmers to quickly grab algorithmic solutions for problems they couldn't solve alone. But nearly everyone writing professional code has to work within a team. And team productivity rises and falls dramatically on the ability of its members to communicate effectively and distribute workload efficiently, two skills which are highly subjective and impossible to prove by rigid formula.
Most software engineering principles are built on experience rather than objective law. Much like the social sciences, we study, learn, adapt and apply -- but with no real guarantees of outcome. All we can say is that some things seem to work better than others in most groups.
I think a lot of it is necessarily determined by how much our mind is able to process at one time. So it comes down to how much the language and tools enable a team or a developer to break down the problem into chunks that are meaningful by themselves, but not so large that they become too hard to grasp. The common theme is the art of organizing information (in this case, the code, the logic, ...). But that's not so different from Maths or Physics, by the way.
Just as the best authors borrow from many styles, the best programmers keep a huge range of patterns in their mental arsenal. Slavishly following a few patterns and adhering to some absolute truth is both lazy and dangerous.
Put it another way, the day we rely on robots for code review is the day I quit.
It all depends on your point of view :-)
But to answer your questions, I think one way to view subjectivity is to recognize that software languages, tools, and best practices are a shared means of communication among individuals. Yes, a programming language is a formal way of instructing a computer how to behave, but a programming language may also be viewed as a way to define and communicate specifications to a high level of detail (the code is the ultimate spec, is it not?).
So as far as we may want to concern ourselves with the degree of subjectivity in software languages, tools, and best practices, I would say that the lack of subjectivity may indicate how well communication is facilitated.
Yes, individuals have certain proclivities that are expressed in their habits and tendencies, but that should not ultimately matter too much in the perfect platform for development.
Turning to my Maths PhD wife I asked if there's any subjectivity in mathematics. Her answer is yes there is, mainly in the way we as humans achieve the answer.
If a mathematical proof is the result, how you get to that result can vary. If the dataset is large you may need to use a computer, which can introduce errors, and thus debate about whether that is the right approach. Or sometimes mathematicians can disagree on the theory - one is trying to prove that x is true while the other is trying to prove that x is false.
I think the same thing exists in computer science. A correct answer is a program that runs correctly, but that definition of correct may be different for each project. Sometimes correct means no bugs. Sometimes it means running efficiently.
From here programmers can argue how best to achieve the "correct" result. A good example of this is the FizzBuzz application. A simple answer would be just a for loop; Enterprise FizzBuzz is also "correct" in that it produces the right output, but it is generally laughed at as "bad" engineering because it over-complicates the idea (it was a joke app, after all).
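To make that concrete, the "simple answer" is something like the following (Python used purely for illustration; the joke Enterprise FizzBuzz wraps this same logic in layers of factories and configuration):

    # The straightforward FizzBuzz loop: both this and Enterprise FizzBuzz
    # produce the same "correct" output; they differ only in engineering style.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)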
How large a role does subjectiveness play in programming? I'd say it's a very large part of what we do, simply because we are human, and because there are multiple ways of getting the "correct" answer so there is disagreement over which way is the best.
Studies have been done showing that certain practices reduce defect rates in software. For instance, one study found a strong correlation between cyclomatic complexity and the probability of a module being fault-prone. Other studies put the average effectiveness of design and code inspections at 55 and 60 percent, respectively. So it appears to be in our best interests to favor simplicity, check metrics, and do code reviews.
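For readers who haven't met the metric: cyclomatic complexity is essentially one plus the number of independent decision points in a function. A hedged sketch (the function and numbers are invented for illustration, Python used only as an example language):

    # Hypothetical example: counting cyclomatic complexity by hand.
    # Each if/elif/while/for/and/or adds one decision point.
    def classify(age, has_id):
        if age < 0:            # decision 1
            return "invalid"
        if age < 18:           # decision 2
            return "minor"
        if not has_id:         # decision 3
            return "unverified adult"
        return "adult"

    # Cyclomatic complexity = 3 decision points + 1 = 4.
    print(classify(30, True))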
We're talking probabilities here, though. If I review your code, I'm not guaranteed to find 60% of your bugs. There are also few absolutes in software development; experienced developers know that the correct answer is generally "it depends." That said, there are a number of practices with objective data in their favor.
I took a glimpse at Hoare Logic in college. What we did was really simple. Most of what I did was proving the correctness of simple programs consisting of while loops, if statements, and sequences of instructions, but nothing more. These methods seem very useful!
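For context, here is a minimal sketch of the kind of exercise described above: the Hoare triple {n >= 0} S {total = n(n+1)/2}, with the precondition, loop invariant and postcondition written as runtime asserts rather than proved formally (Python used purely for illustration):

    def sum_to(n):
        assert n >= 0                          # precondition
        total, i = 0, 0
        while i < n:
            assert total == i * (i + 1) // 2   # loop invariant
            i += 1
            total += i
        assert total == n * (n + 1) // 2       # postcondition
        return total

    print(sum_to(10))   # 55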
Are formal methods used in industry widely?
Are these methods used to prove mission-critical software?
Well, Sir Tony Hoare joined Microsoft Research about 10 years ago, and one of the things he started was a formal verification of the Windows NT kernel. Indeed, this was one of the reasons for the long delay of Windows Vista: starting with Vista, large parts of the kernel are actually formally verified with respect to certain properties like absence of deadlocks, absence of information leaks, etc.
This is certainly not typical, but it is probably the single most important application of formal program verification, in terms of its impact (after all, almost every human being is in some way, shape or form affected by a computer running Windows).
This is a question close to my heart (I'm a researcher in Software Verification using formal logics), so you'll probably not be surprised when I say I think these techniques have a useful place, and are not yet used enough in the industry.
There are many levels of "formal methods", so I'll assume you mean those resting on a rigorous mathematical basis (as opposed to, say, following some 6-Sigma style process). Some types of formal methods have had great success - type systems being one example. Static analysis tools based on data flow analysis are also popular, model checking is almost ubiquitous in hardware design, and computational models like Pi-Calculus and CCS seem to be inspiring some real change in practical language design for concurrency. Termination analysis is one that's had a lot of press recently - the SDV project at Microsoft and work by Byron Cook are recent examples of research/practice crossover in formal methods.
Hoare reasoning has not, so far, made great inroads in industry - this is for more reasons than I can list, but I suspect it is mostly down to the complexity of writing and then proving specifications for real programs (they tend to get big, and fail to express properties of many real-world environments). Various sub-fields in this type of reasoning are now making big inroads into these problems - Separation Logic being one.
This is partially the nature of ongoing (hard) research. But I must confess that we, as theorists, have entirely failed to educate the industry on why our techniques are useful, to keep them relevant to industry needs, and to make them approachable to software developers. At some level, that's not our problem - we're researchers, often mathematicians, and practical usage is not foremost in our minds. Also, the techniques being developed are often too embryonic for use in large scale systems - we work on small programs, on simplified systems, get the math working, and move on. I don't much buy these excuses though - we should be more active in pushing our ideas, and getting a feedback loop between the industry and our work (one of the main reasons I went back to research).
It's probably a good idea for me to resurrect my weblog, and make some more posts on this stuff...
I cannot comment much on mission-critical software, although I know that the avionics industry uses a wide variety of techniques to validate software, including Hoare-style methods.
Formal methods have suffered because early advocates like Edsger Dijkstra insisted that they ought to be used everywhere. Neither the formalisms nor the software support were up to the job. More sensible advocates believe that these methods should be used on problems that are hard. They are not widely used in industry, but adoption is increasing. Probably the greatest inroads have been in the use of formal methods to check safety properties of software. Some of my favorite examples are the SPIN model checker and George Necula's proof-carrying code.
Moving away from practice and into research, Microsoft's Singularity operating-system project is about using formal methods to provide safety guarantees that ordinarily require hardware support. This in turn leads to faster performance and stronger guarantees. For example, in Singularity they have proved that if a third-party device driver is allowed into the system (which means basic verification conditions have been proved), then it cannot possibly bring down the whole OS; the worst it can do is hose its own device.
Formal methods are not yet widely used in industry, but they are more widely used than they were 20 years ago, and 20 years from now they will be more widely used still. So you are future-proofed :-)
Yes, they are used, but not widely in all areas. There are more methods than just Hoare logic; some are used more, some less, depending on their suitability for a given task. The common problem is that software is biiiiiiig and verifying that all of it is correct is still too hard a problem.
For example, the theorem prover ACL2 (software that aids humans in proving program correctness) has been used to prove that a certain floating-point processing unit does not have a certain type of bug. It was a big task, so this technique is not too common.
Model checking, another kind of formal verification, is used rather widely nowadays, for example Microsoft provides a type of model checker in the driver development kit and it can be used to verify the driver for a set of common bugs. Model checkers are also often used in verifying hardware circuits.
Rigorous testing can also be thought of as formal verification - there are formal specifications of which paths of a program should be tested, and so on.
"Are formal methods used in industry?"
Yes.
The assert statement in many programming languages is related to formal methods for verifying a program.
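A hedged illustration of that connection: an assert states a property the surrounding code is supposed to guarantee, much like an informal pre- or postcondition (the function below is just an example, not taken from any particular codebase; Python used for illustration):

    def binary_search(items, target):
        assert items == sorted(items), "precondition: input must be sorted"
        lo, hi = 0, len(items)
        while lo < hi:
            mid = (lo + hi) // 2
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        result = lo if lo < len(items) and items[lo] == target else -1
        assert result == -1 or items[result] == target   # postcondition
        return result

    print(binary_search([1, 3, 5, 7], 5))   # 2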
"Are formal methods used in industry widely ?"
No.
"Are these methods used to prove mission-critical software ?"
Sometimes. More often, they're used to prove that the software is secure. More formally, they're used to prove certain security-related assertions about the software.
There are two different approaches to formal methods in the industry.
One approach is to change the development process completely. The Z notation and the B method that were mentioned are in this first category. B was applied to the development of the driverless subway line 14 in Paris (if you get a chance, climb in the front wagon. It's not often that you get a chance to see the rails in front of you).
Another, more incremental, approach is to preserve the existing development and verification processes and to replace only one of the verification tasks at a time with a new method. This is very attractive, but it means developing static analysis tools for existing, widely used languages that are often not easy to analyse (because they were not designed to be).
If you go to (for instance)
http://dblp.uni-trier.de/db/indices/a-tree/d/Delmas:David.html
(sorry, only one hyperlink allowed for new users :( )
you will find instances of practical applications of formal methods to the verification of C programs (with static analyzers Astrée, Caveat, Fluctuat, Frama-C) and binary code (with tools from AbsInt GmbH).
By the way, since you mentioned Hoare Logic, in the above list of tools, only Caveat is based on Hoare logic (and Frama-C has a Hoare logic plug-in). The others rely on abstract interpretation, a different technique with a more automatic approach.
My area of expertise is the use of formal methods for static code analysis to show that software is free of run-time errors. This is implemented using a formal methods technique known as "abstract interpretation". The technique essentially enables you to prove certain attributes of a software program, e.g. prove that a+b will not overflow or that x/(x-y) will not result in a divide by zero. An example static analysis tool that uses this technique is Polyspace.
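To give a flavour of the idea (this is a toy sketch of the general technique, not how Polyspace works internally): abstract interpretation reasons over sets of possible values, for instance intervals, instead of concrete runs, and flags any operation whose abstract value could include an error case. Python used purely for illustration:

    # Toy interval domain: track [lo, hi] ranges instead of concrete values.
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)

    def check_division(den):
        # If 0 lies inside the denominator's interval, a divide by zero is possible.
        return "possible divide by zero" if den.lo <= 0 <= den.hi else "proved safe"

    x = Interval(1, 10)            # x is known to be in [1, 10]
    y = Interval(0, 5)             # y is known to be in [0, 5]
    print(check_division(x - y))   # x - y is in [-4, 10] -> possible divide by zero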
With respect to your questions "Are formal methods used in industry widely?" and "Are these methods used to prove mission-critical software?":
The answer is yes. This opinion is based on my experience supporting the Polyspace tool for industries that rely on embedded software to control safety-critical systems such as the electronic throttle in an automobile, the braking system of a train, a jet engine controller, a drug delivery infusion pump, etc. These industries do indeed use these types of formal methods tools.
I don't believe all 100% of these industry segments are using these tools, but the use is increasing. My opinion is that the Aerospace and Automotive industries lead with the Medical Device industry quickly ramping up use.
Polyspace is a (hideously expensive, but very good) commercial product based on program verification. It's fairly pragmatic, in that it scales up from 'enhanced unit testing that will probably find some bugs' to 'the next three years of your life will be spent showing these 10 files have zero defects'.
It is based more on negative verification ('this program won't corrupt your stack') than on positive verification ('this program will do precisely what these 50 pages of equations say it will').
To add to Jorg's answer, here's an interview with Tony Hoare. The tools Jorg's referring to, I think, are PREfast and PREfix. See here for more information.
Besides other, more procedural approaches, Hoare logic was the basis of Design by Contract, introduced as an object-oriented technique by Bertrand Meyer in Eiffel (see Meyer's article of 1992, page 4). While Design by Contract is not the same as formal verification methods (for one thing, DbC doesn't prove anything until the software is executed), in my opinion it provides a more practical use.
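A hedged sketch of what Design by Contract looks like in practice. Eiffel has language-level require/ensure clauses; in the illustration below (plain Python, with the class and names invented for the example) the contracts are just runtime asserts:

    class Account:
        def __init__(self, balance=0.0):
            self.balance = balance

        def withdraw(self, amount):
            # preconditions (Eiffel's "require" clause)
            assert amount > 0, "amount must be positive"
            assert amount <= self.balance, "insufficient funds"
            old_balance = self.balance
            self.balance -= amount
            # postcondition (Eiffel's "ensure" clause)
            assert self.balance == old_balance - amount
            # class invariant
            assert self.balance >= 0

    acct = Account(100.0)
    acct.withdraw(30.0)
    print(acct.balance)   # 70.0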
I come from a fairly strong OO background, and the benefits of OOD and OOP are second nature to me, but recently I've found myself in a development shop tied to procedural programming habits. The implementation language has some OOP features, but they are not used in optimal ways.
Update: everyone seems to have an opinion about this topic, as do I, but the question was:
Have there been any good comparative studies contrasting the cost of software development using procedural programming languages versus Object Oriented languages?
Some commenters have pointed out the dubious nature of trying to compare apples to oranges, and I agree that it would be very difficult to accurately measure, however not entirely impossible perhaps.
Almost all of these questions are confounded by the problem that individual programmer productivity varies by an order of magnitude or more; if you happen to have an OO programmer at productivity x and a "procedural" programmer who is a 10x programmer, the procedural programmer is liable to win even if OO is faster in some sense.
There's also the problem that coding productivity is usually only 10-20 percent of the total effort in a realistic project, so higher coding productivity doesn't have much impact; even that hypothetical 10x programmer, or an infinitely fast programmer, can't cut the overall effort by more than 10-20 percent.
You might have a look at Fred Brooks' paper "No Silver Bullet".
After poking around with google I found this paper here. The search terms I used are Productivity object oriented.
The opening paragraph goes on to say:

"Introduction of object-oriented technology does not appear to hinder overall productivity on new large commercial projects, but it neither seems to improve it in the first two product generations. In practice, the governing influence may be the business workflow and not the methodology."
I think you will find that Object Oriented Programming is better in specific circumstances but neutral for everything else. What sold my bosses on converting my company's CAD/CAM application to an object-oriented framework is that I showed precisely the areas in which it would help. The focus wasn't on the methodology as a whole but on how it would help us solve the specific problems we had. For us it was having an extensible framework for adding more shapes, reports, and machine controllers, and using collections to remove the memory limitation of the older design.
OO and procedural offer two different ways to develop, and both can be costly if badly managed.
If we suppose the work is done by the best person in both cases, I think the result might be equal in terms of cost.
I believe the cost difference will show up in the maintenance phase, where you need to add features and modify existing ones. Procedural projects are harder to cover with automated testing, are less able to be extended without affecting other parts, and are harder to understand piece by piece (because cohesive parts aren't necessarily grouped together).
So, I think, the OO cost will be lower in the long run compared to Procedural.
I think S.Lott was referring to the "unrepeatable experiment" phenomenon, i.e. you cannot write application X procedurally, then rewind time and write it OO to see what the difference is.
You could write the same app twice in two different ways, but
you would learn something about the app doing it the first way that would help you in the second way, and
you may be better at OO than at procedural, or vice versa, depending on your experience and the nature of the application and the tools chosen,
so there really is no direct basis for comparison.
Empirical studies are likewise useless, for similar reasons - different applications, different teams, etc.
Paradigm shifts are difficult, and a small percentage of programmers may never make the transition.
If you are free to develop your way, then the solution is simple: develop things your way, and when your co-workers notice that you are coding circles around them and your code doesn't break nearly as often etc. and they ask you how you do it, then teach them OOP (along with TDD and any other good practices you may use).
If not, well, it might be time to polish the resume... ;-)
Good idea. A head-to-head comparison. Write application X in a procedural style, and in an OO style and measure something. Cost to develop. Return on Investment.
What does it mean to write the same application in two styles? It would be a different application, wouldn't it? The procedural people would balk that the OO folks were cheating when they used inheritance or messaging or encapsulation.
There can't be such a comparison. There's no basis for comparing two "versions" of an application. It's like asking if apples or oranges are more cost-effective at being fruit.
Having said that, you have to focus on things other folks can actually see.
Time to build something that works.
Rate of bugs and problems.
If your approach is better, you'll be successful, and people will want to know why.
When you explain that OO leads to your success... well... you've won the argument.
The key is time. How long does it take the company to change the design to add new features or fix existing ones. Any study you make should focus on that area.
My company had an event-driven, procedure-oriented design for CAM software in the mid '90s, created using VB3. It was taking a long time to adapt the software to new machines, and a long time to test the effects of bug fixes and new features.
When VB6 came along, I was able to graph out the current design and a new design that fixed the testing and adaptation problems. The non-technical boss grasped what I was trying to do right away.
The key is to explain WHY OOP will benefit the project. Use things like Refactoring by Fowler and Design Patterns to show how a new design will lower the time to do things. Also include how you get from Point A to Point B. Refactoring will help with showing how you can have working intermediate stages that can be shipped.
I don't think you'll find a study like that. At the very least, you should define what you mean by "cost". OOP design is somewhat slower up front, so in the short term development may be faster with procedural programming. In the very short term, maybe spaghetti coding is even faster.
But as the project grows, things reverse, because OOP design is better suited to managing code complexity.
So on a small project procedural design MAY be cheaper, because it's faster and you don't hit the drawbacks.
But on a big project you'll get stuck very quickly using only a simple paradigm like procedural programming.
I doubt you will find a definitive study. As several people have mentioned this is not a reproducible experiment. You will find anecdotal evidence, a lot of it. Some people may find some statistical studies, but I would examine them carefully. I am not aware of any really good ones.
I also want to make another point: there is no such thing as purely object-oriented or purely procedural in the real world. Many if not most object methods are written with procedural code. At the same time, many procedural programs use OO methodologies such as encapsulation (also called abstraction by some).
Don't get me wrong, OO and procedural programs look and are different, but it is a matter of dark gray vs light gray instead of black and white.
This article says nothing about OOP vs Procedural. But I'd think that you could use similar metrics from your company for a discussion.
I find it interesting as my company is starting to explore the ROWE initiative. In our first session, it was apparent that we don't currently capture enough metrics on outcomes.
So you need to focus on 1) Is the maintenance of current processes impeding future development? 2) How are different methods going to affect #1?
Like most developers, I'm a business developer, which in essence consists of slapping a UI onto some back-end data store. (We all know there's a lot more to it than that, but that's usually what it boils down to.)
I understand that game development is very different from business development, but I'm having a hard time explaining it to a friend of mine. I was hoping the SO community could help me out here.
To me, modern game developers deal a lot with manipulating 3-dimensional graphics. In gaming code (and I'm guessing here), you're assembling polygons (or something like that), rotating 'em, etc. This involves a different way of thinking from manipulating relational data (for instance). I don't know, really. I just know it's different.
EDIT:
I should stress that by "development" I mean "programming," not all of the aspects that go into creating a game or piece of business software. I'm sorry I didn't make that clear originally.
Thanks!
I'm in game development but came from business development long ago. Game development is very rigorous in mathematics if you work on the physics or graphics side. Even AI can need quite a bit of mathematics for the low-level stuff. The hardware usually takes care of a lot of the polygon manipulation math as far as drawing on the screen goes. There is also a lot of involvement with generating the in-game data with (often) many tools that are run in a pre-processing step, and that too can be math-intensive if you are generating visibility data.
In terms of programming domains, amongst other things, we deal with:
Graphics programming (including shader development)
Animation
Physics simulation
AI and gameplay
Audio
Networking (typically fairly low-level stuff)
Some of these involve pretty serious maths and algorithms knowledge. On top of all that, we face extremely tough speed constraints, and typically have to be very careful with memory usage too. We face constantly changing hardware, and since we're trying to push hardware to the limit, this can be pretty tough - you can't just abstract it away. Most game development is still quite low-level C++ work. We probably deal with databases less than most other programmers nowadays (although online games are changing this)!
Programmers are often the minority on modern game projects: it's all about content creation (animation, modelling, texturing, audio and design). This means many game programmers are dedicated to making the content creation process efficient, rather than working on the game code itself. This work may have more relaxed speed and memory constraints, although it does have to deal with massive data sets.
Making the game 'fun' is one of the hardest things to do - in business terminology, it means "extremely unstable requirements," as the designers constantly change their mind about how things should work, to chase down that elusive fun factor.
Finally, games are generally a ship-once, no chance to fix stuff kind of deal. This actually means there's very little code maintenance involved, so traditionally there may have been less attention paid to code quality issues. This is changing now with the growth in post-launch content addition, online gaming and the sheer size of modern projects.
Overall it's an incredibly exciting field to be in, the downside is that it's often less well paid (because it's a very tough business financially for developers, and because it's popular, there's always a fresh supply of people looking for jobs).
Just some random thoughts about what is different in game development. Note that there might be some sarcasm in it, though I tried to suppress the urge.
Unless you're a lucky employee of one of those new-style studios (like Eidos Montreal or Blizzard), there is always a deadline to fear that is much too short. In business programming, you mostly make the deadline up for yourself.
A business application serves some specific need. A game's intent is to entertain people. You can't really predict if a game will fail until it's out.
Performance is essential, in every aspect of the game. Writing code that is good to maintain is second priority. In business programming, good code that works is top priority.
For a business application, a shiny UI is a bonus. For a game, it is a must.
Debugging games is much harder, because there is always some hardware dependence which results in bugs that can only be reproduced on some machines, none of which is in your company. And a game sucks up much more performance than a typical business application.
You have people dedicated to creating the art, story, music, sound, background and design, none of which necessarily needs programming knowledge (scripting is a little different), i.e. you have a lot of content which is what the users (players) will see. Nobody cares about how good your code is, unless performance is bad or there are bugs. The others get the praise.
For larger games, you have programmers dedicated just to 3D graphics, networking, audio, tools, scripting, physics and so on. Most of them are highly specialized and each of them can lead the game into a disaster. You'll only need advanced math skills if you're the graphics or physics guy. Well, or AI.
Most games are fire-and-forget, apart from some bugfixes, unless it's one of the more successful games, which get an expansion pack or a sequel.
Security is an important issue for online games, since there are far more annoying people trying to put other players off than there are for business applications, many of which are for (more or less) internal use at the customer.
You are expected to work much more than when writing business applications.
To land a job for an AAA title, you need to have worked on at least three shipped AAA titles (no, no typo here, ever read some job descriptions at Blizzard or LucasArts? :P)
But here come the good things:
You can pretend to work when you're playing games.
And finally, programming games is fun. Priceless.
Business development is generally much more forgiving.
The reason is basically this; usually, people ARE PAID to use business software. People PAY to use game software.
This may sound like it's not answering your question, but it really is. When my boss says "use Microsoft Word for that document", they're providing the software, and I'm obligated to use Microsoft Word. And so, when using it, when it decides to renumber all my chapter headings "just because" or a save to disk takes 30 seconds while it resolves OLE references (it's JUST ONE FREAKING EXCEL SPREADSHEET, for heaven's sake!), I just grit my teeth and remind myself I'm getting paid to do this.
Whereas, if I'm in a game, I'm expecting entertainment. I'm expecting the experience to work properly, and smoothly, and cleanly, with no major stutters or problems.
Again, getting down to why this is an issue for programming; those loops and structures in the game had better be DAMN good to make sure there is no major slowdown, no stuttering in the game engine, nothing that makes the consumer who just spent X amount of his hard-earned dollars say "this is a piece of crap" and walk away. With business software, you can get away with that sort of thing; in some ways, it's almost expected. Again, look at the performance of Microsoft Word; if it were a game, it would be laughed out of existence.
I know I sound like I'm picking on Microsoft Word, and I generally am, because I find it to be hideous, but the point is true for so many pieces of software. CAD software is another example. Same basic things going on as in games, but in general it's slow and hard to work with without a lot of training.
The difference comes down to polish, and the level of polish that's expected. Yes, there's generally more flexibility in business software than there is in games; but moreover and more importantly from a coding perspective, the code has GOT to work efficiently and cleanly in a game; business software is, generally, more forgiving of sloppy code.
In a business app, unoptimized and slow algorithms are generally accepted; and while they're never preferable, frequently the business decision gets made to add another feature instead of improving the performance. But in games, performance IS a feature, and one which is make-or-break.
One should have infinite loops, one shouldn't.
One should have infinite loops, one shouldn't. - Rich Bradshaw
Rich is right. Fundamentally, from a coding standpoint, a game loop creates a "frame" of action in which actions are taken based on the state of the game such as controller input, object collisions, etc. This loop repeats infinitely until some state of some game element or input tells it to stop or "quit." This approach keeps the CPU and graphics card pretty busy, hence the market for gamer machines with fast processors and even faster graphics cards.
Business applications do not have an active loop. Instead, they sit idle waiting for an event such as a click, a message from a web service client, an HTTP GET request, etc. Then they respond to the event.
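A minimal, hedged contrast of the two models described above (illustrative Python; the stub helpers stand in for real input polling, game logic, rendering and event handling, and real engines and GUI frameworks are far more involved):

    import time
    from queue import Queue

    # Stub helpers so the sketch runs; a real engine replaces these.
    frames_left = 3
    def read_input():
        global frames_left
        frames_left -= 1
        return frames_left > 0           # pretend the player quits after a few frames
    def update_world(dt): pass           # physics, AI, game logic would go here
    def render_frame():   pass           # drawing would go here
    def handle(event):    print("handled", event)

    def game_loop():
        running = True
        last = time.perf_counter()
        while running:                   # runs every frame, whether or not anything happened
            now = time.perf_counter()
            dt, last = now - last, now
            running = read_input()       # poll controller/keyboard state
            update_world(dt)
            render_frame()

    def business_app(event_queue):
        while True:                      # sits idle until an event arrives
            event = event_queue.get()    # blocks; no work is done between events
            if event == "quit":
                break
            handle(event)

    game_loop()
    q = Queue()
    for e in ("click", "save", "quit"):
        q.put(e)
    business_app(q)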
Sure, gaming is generally more geometrically intensive than business applications, but that is not entirely true. Consider image editing, CAD and graphics tools. For many, these are business applications. But for the most part, a business application has to do with querying data, displaying that data, accepting user input, and modifying the data based on user input. In many cases, the business application does this across the network or even the Internet, but it's an apt nutshell.
The skillset and mindset of a business application developer and the game developer is often different. The game developer has a limited number of input constructs to consider in terms of creating a user experience with an unlimited choice of context or "world" if you will. The business developer is the opposite, with a limited set of potential contexts, usually the web page or the basic window, and an unlimited (or nearly so) set of input and data display combinations to create a user experience entirely different than the game developer sets out to achieve.
One big difference between business development and game development is the number of disciplines involved. Most business software is created by a team of developers, who all have the same basic skillset. In contrast, a game is created by a team of game designers, visual artists, 3d modelers, animators, musicians, and developers.
Good points about mathematics and integration of artists and other specialists in the team. In addition, I'd say that:
Game development, to some extent, will be more hardware dependent. In many cases, games are built simultaneously for several platforms and consoles (not to mention cellphones) with different architectures. That is abstracted away up to a certain extent, but developers cannot completely avoid this fact.
Game development is often more performance sensitive, or at least the performance requirements are different. You're dealing with real-time experience, so a lot of time is spent optimizing those pesky fps.
In many cases, game development does not care as much about reuse and maintainability. The game engine will probably be reused, but the application code base will probably not live to see v2.0. In the last stretch of a project, there is a lot of quick and dirty debugging going on. If it looks fine to the end user, there's no added value in making an elegant fix two days before the release.
Let's start from the goal: the goal of game development is to create an entertaining product. It should be accurate to the extent that it looks good and runs smoothly. The goal of a business software solution is to model a work process. It should be a tool that works fast enough: a stable product that executes its tasks accurately and securely.
Since we target different goals, we use different approaches to building a game and building business software. Let's move on to the requirements. For a game, the requirements are determined by the game designer. For a software product, the business defines the process and the requirements. For a game the requirements are not final - whether we have small cartoon figures or realistic human models does not matter to the game engine, for example. But for a software product, the requirements should be strictly followed and clarified in the maximum possible detail before development.
From the different requirements come different software designs and development approaches. For a game, performance and gameplay are critical, and the quality of the graphics and sounds (for example) can be reduced just to stay compatible with weaker hardware. The physical model can also be simplified just to run more smoothly and improve the gameplay. For business software everything should be exact, and cutting features means your product is no longer as functional as designed.
For a game, security is not important - there is no critical customer data that needs to be protected. For business software a good security system should be supplied - starting with data encryption (when saving to storage or transferring over the network), moving on to a backup system and, not least, compatibility with previous versions.
I could continue with other aspects but I guess this is already too much for one post...
Business software (that isn't shrink-wrap software) can generally be much more poorly written but still considered a commercial success due to the bizarre disconnect between the quality of the product and saleability of the product. Game software, on the other hand, has to actually be good to survive the marketplace.
The bar for quality in specialized business software is generally much lower.
Business software has to be reliable, maintainable, consistent, not be too annoyingly slow, and can build on lots of already written stuff, such as databases, controls, forms etc.
A games programmer often starts with a blank sheet - hardware reference manuals, some documentation about the hardware and usually thin vendor libraries around some advanced hardware that's completely different to the last job.
From this they have to build what you see - and make most of it work within a 20ms time period, reliably, and often within a ridiculously short time period, facing changing requirements and often a very hard deadline, working untold numbers of hours for a comparative pittance.
That's not to mention often having to master some fairly complex mathematics and physics....
Performance is really the difference, from what I can tell.
Technologywise, games are usually Windows/C++ driven.
Game programming has more in common with scientific programming. You are modeling behavioral systems and anticipating results based upon a limited set of input.
Does attempting to develop some sort of game, even just as a hobby during leisure time provide useful (professional) experience or is it a childish waste of time?
I have pursued small personal game projects on and off throughout my programming career. I've found the (often) strict performance requirements and escalating design complexity have taught me some of my most useful programming lessons.
In these projects, to name just a few lessons, I very quickly came face to face with "everything is fast for small N". I also discovered the hard way the value of using basic object-oriented design principles to manage complexity.
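A hedged illustration of that first lesson (the collision example and numbers are made up, Python used purely for illustration): a naive O(n^2) check feels instant for a handful of game objects and falls over for thousands.

    import random, time

    def naive_collision_count(positions, radius=1.0):
        hits = 0
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):   # every pair: O(n^2)
                dx = positions[i][0] - positions[j][0]
                dy = positions[i][1] - positions[j][1]
                if dx * dx + dy * dy < (2 * radius) ** 2:
                    hits += 1
        return hits

    for n in (100, 1000, 4000):
        pts = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(n)]
        start = time.perf_counter()
        naive_collision_count(pts)
        print(n, "objects:", round(time.perf_counter() - start, 3), "seconds")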
In a field where many technologies and topics can be quite dry/dull, I think hobby game development is important in motivating new (and not so new) developers to brush up on essential skills while having fun at the same time.
This question talks about hobby projects in general; here, however, I am more interested in game projects specifically and how valuable they are to professional programmers.
You can learn a lot from game development. Game development requires a discipline that you can't find in other programming projects.
Here are just a small set of things game development has taught me:
Optimization for speed
Sacrificing computational depth for speed
Developing under small constraints of memory
Building a system that works like an operating system but is geared toward speed.
Keeping hundreds to thousands of objects in a tree, each with their own unique characteristics
Some areas of game development have great academic value (like Artificial Intelligence, Procedural Algorithms, etc)
It doesn't matter how much of a hack the code is, as long as the gameplay is there. Translating this to other disciplines, the objective of programming is to make the customers happy, regardless of how clever or ugly your code is.
Because game programmers are forced to use less resources, they become better programmers.
I think extra-curricular development is a good thing, whether it is games or not. For a start you can try different technologies and development platforms, and it is a great way of keeping your skills up to date. I prefer mathematical algorithms to games, but that's just a preference which doesn't speak to the value of doing it at all.
If you bring any of this back to your employer then they are getting the benefit of your broadening knowledge which you are gaining on your own time. That is good for them too.
I say code away to your heart's content!
Games have some of the most complicated processing that's "agreeable" to a layman, hobbyist programmer. In high school, all I wrote were games. Want to explore physics? Write a game. 3D graphics? Write a game. High performance computing? Write a game. AI? Economics? Military Strategy? Natural Language Processing? Theorem Proving? Write a game.
You don't have to publish it, you don't have to document it, you don't even have to PLAY it, you just need to fiddle with it, and you'll learn any algorithm you find interesting as you try to apply it -- in a game.
Games are interesting because of the wide domain they can cover. Everything else is just data processing, and you can do that at work!
Game development (or any other sort of personal programming) is a good way to:
learn new languages
learn new concepts (TDD, OO, etc..)
Use and evaluate different tools/technologies (CI, automated tests, etc...)
These sorts of projects give you the freedom to explore different aspects of the programming world that you are not able to explore at work. If you are stuck doing line-of-business applications at work, you probably won't deal with a physics engine or spatial rendering. But you could explore these subjects in your game.
This would also provide you with a good portfolio of code you can bring if/when you interview for new positions. Assuming the code you write is in decent shape...
If you are looking to get a job in game development, you should absolutely be doing some hobby development on the side while you look. Being able to send a more-or-less complete game along with your resume makes it stand out from the crowd. When we list a game programming job we get a ton of resumes, and while I'm thrilled to hire people with no industry experience to fill them, it's kind of hard to pick between all the options.
Sitting down at a problem and solving it with the tools at hand (whatever the problem is - fixing a database, programming an interface, or making chess in ASCII) helps you become a better programmer.
Hands down.
More importantly, I think: is the hobby game development making you happy?
Many areas of game development can absolutely be applicable on a more professional level, but if that's the only reason you're developing games as a hobby then you may want to re-evaluate the situation and perhaps put your efforts toward various open source projects that could proudly be displayed on a resume (not that hobby game development couldn't) or discussed at an interview.
Your hobbies should be something you love doing and if you love game development, then absolutely stick with it and hey, maybe you'll find yourself doing it professionally which ultimately seems like the ideal situation for your scenario.
I don't see how anything that enables you learn, practise and experiment could be considered 'childish'.
Besides, if you are aiming to produce a decent (even 'professional') game, it will almost certainly require learning and mastering skills that are directly transferable to many 'conventional' roles. Optimisation, testing, cross-platform working, UI design and usability... the list goes on.
YES
I taught myself C (and Psion's proprietary OO extensions) using the Psion Series 3 TopSpeed C SDK and wrote several games which I released under the GNU license. (Previously I had quite a bit of experience in Turbo Pascal, Pascal on the Amiga, and Borland C++ 3.1 on Windows 3.1 in a financial analysis internship doing signal processing, but when I got the Psion I had to go back to C, and I used K&R to get a good solid basis for that code.)
Then I parlayed the expertise I gained on the Psion platform into a 3-year gig doing mobile development using their industrial handhelds, where I gained experience with databases too. It was a huge turnaround for them: the product was in disarray, and I had far more experience with the platform than anyone else - just from that year writing games.
I parlayed that into a Windows development gig where I eventually became IT director and got massive experience with SQL Server, Windows, ASP, data centers, DR, you name it.
Then I moved into data warehouse consultancy.
I owe a lot of it back to that first foot in the door where I really was able to make a massive difference to that first company because of the platform experience in C and that particular C-based library system.