What's the difference between game development and business development? - language-agnostic

Like most developers, I'm a business developer, which in essence consists of slapping a UI onto some back-end data store. (We all know there's a lot more to it than that, but that's usually what it boils down to.)
I understand that game development is very different from business development, but I'm having a hard time explaining it to a friend of mine. I was hoping the SO community could help me out here.
To me, modern game developers deal a lot with manipulating 3-dimensional graphics. In gaming code (and I'm guessing here), you're assembling polygons (or something like that), rotating 'em, etc. This involves a different way of thinking from manipulating relational data (for instance). I don't know, really. I just know it's different.
EDIT:
I should stress that by "development" I mean "programming," not all of the aspects that go into creating a game or piece of business software. I'm sorry I didn't make that clear originally.
Thanks!

I'm in game development but came from business development long ago. Game development is very rigorous in mathematics if you work on the physics or graphics side. Even AI can need quite a bit of mathematics for the low-level stuff. The hardware usually takes care of a lot of the polygon-manipulation math as far as drawing on the screen goes. There is also a lot of work generating the in-game data, often with many tools that run in a pre-processing step, and that too can be math-intensive if you are generating visibility data.

In terms of programming domains, amongst other things, we deal with:
Graphics programming (including shader development)
Animation
Physics simulation
AI and gameplay
Audio
Networking (typically fairly low-level stuff)
Some of these involve pretty serious maths and algorithms knowledge. On top of all that, we face extremely tough speed constraints, and typically have to be very careful with memory usage too. We face constantly changing hardware, and since we're trying to push hardware to the limit, this can be pretty tough - you can't just abstract it away. Most game development is still quite low-level C++ work. We probably deal with databases less than most other programmers nowadays (although online games are changing this)!
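To make the memory point concrete, here's a toy sketch (my own illustration, not code from any shipped engine) of the data-layout thinking this forces on you: keep the hot data tightly packed so the per-frame loop streams through cache-friendly memory instead of chasing pointers to heap-allocated objects:

    #include <cstddef>
    #include <vector>

    // Hypothetical particle system, laid out struct-of-arrays style so the
    // update loop walks tightly packed, cache-friendly memory.
    struct Particles {
        std::vector<float> x, y, z;    // positions
        std::vector<float> vx, vy, vz; // velocities
    };

    void integrate(Particles& p, float dt) {
        const std::size_t n = p.x.size();
        for (std::size_t i = 0; i < n; ++i) { // streams linearly through memory
            p.x[i] += p.vx[i] * dt;
            p.y[i] += p.vy[i] * dt;
            p.z[i] += p.vz[i] * dt;
        }
    }

An abstraction-first design would hide each particle behind its own object; on console hardware, that indirection is exactly the cost you can't abstract away.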
Programmers are often the minority on modern game projects: it's all about content creation (animation, modelling, texturing, audio and design). This means many game programmers are dedicated to making the content creation process efficient, rather than working on the game code itself. This work may have more relaxed speed and memory constraints, although it does have to deal with massive data sets.
Making the game 'fun' is one of the hardest things to do - in business terminology, it means "extremely unstable requirements", as the designers constantly change their minds about how things should work, to chase down that elusive fun factor.
Finally, games are generally a ship-once, no chance to fix stuff kind of deal. This actually means there's very little code maintenance involved, so traditionally there may have been less attention paid to code quality issues. This is changing now with the growth in post-launch content addition, online gaming and the sheer size of modern projects.
Overall it's an incredibly exciting field to be in, the downside is that it's often less well paid (because it's a very tough business financially for developers, and because it's popular, there's always a fresh supply of people looking for jobs).

Just some random thoughts about what is different in game development. Note that there might be some sarcasm in it, though I tried to suppress the urge.
Unless you're a lucky employee of one of those new-style studios (like Eidos Montreal or Blizzard), there is always a looming deadline that is much too short. In business programming, you mostly make up the deadline for yourself.
A business application serves some specific need. A game's intent is to entertain people. You can't really predict if a game will fail until it's out.
Performance is essential, in every aspect of the game. Writing code that is good to maintain is second priority. In business programming, good code that works is top priority.
For a business application, a shiny UI is a bonus. For a game, it is a must.
Debugging games is much harder, because there is always some hardware dependence which results in bugs that can only be reproduced on some machines, none of which is in your company. And a game sucks up much more performance than a typical business application.
You have people dedicated to creating the art, story, music, sound, background and design, none of which necessarily needs programming knowledge (scripting is a little different), i.e. you have a lot of content which is what the users (players) will see. Nobody cares about how good your code is, unless performance is bad or there are bugs. The others get the praise.
For larger games, you have programmers dedicated just to 3D graphics, networking, audio, tools, scripting, physics and so on. Most of them are highly specialized and each of them can lead the game into a disaster. You'll only need advanced math skills if you're the graphics or physics guy. Well, or AI.
Most games are fire-and-forget, apart from some bugfixes, unless it's one of the more successful games, which get an expansion pack or a sequel.
Security is an important issue for online games, since there are many more annoying people trying to spoil things for others than there are for business applications, many of which are for (more or less) internal use at the customer.
You are expected to work much more than when writing business applications.
To land a job for an AAA title, you need to have worked on at least three shipped AAA titles (no, no typo here, ever read some job descriptions at Blizzard or LucasArts? :P)
But here come the good things:
You can pretend to work when you're playing games.
And finally, programming games is fun. Priceless.

Business development is generally much more forgiving.
The reason is basically this; usually, people ARE PAID to use business software. People PAY to use game software.
This may sound like it's not answering your question, but it really is. When my boss says "use Microsoft Word for that document", they're providing the software, and I'm obligated to use Microsoft Word. And so, when using it, when it decides to renumber all my chapter headings "just because" or a save to disk takes 30 seconds while it resolves OLE references (it's JUST ONE FREAKING EXCEL SPREADSHEET, for heaven's sake!), I just grit my teeth and remind myself I'm getting paid to do this.
Whereas, if I'm in a game, I'm expecting entertainment. I'm expecting the experience to work properly, and smoothly, and cleanly, with no major stutters or problems.
Again, getting down to why this is an issue for programming; those loops and structures in the game had better be DAMN good to make sure there is no major slowdown, no stuttering in the game engine, nothing that makes the consumer who just spent X amount of his hard-earned dollars say "this is a piece of crap" and walk away. With business software, you can get away with that sort of thing; in some ways, it's almost expected. Again, look at the performance of Microsoft Word; if it were a game, it would be laughed out of existence.
I know I sound like I'm picking on Microsoft Word, and I generally am, because I find it to be hideous, but the point is true for so many pieces of software. CAD software is another example. Same basic things going on as in games, but in general it's slow and hard to work with without a lot of training.
The difference comes down to polish, and the level of polish that's expected. Yes, there's generally more flexibility in business software than there is in games; but moreover and more importantly from a coding perspective, the code has GOT to work efficiently and cleanly in a game; business software is, generally, more forgiving of sloppy code.
In a business app, unoptimized and slow algorithms are generally accepted; and while they're never preferable, frequently the business decision gets made to add another feature instead of improving the performance. But in games, performance IS a feature, and one which is make-or-break.

One should have infinite loops, one shouldn't.

One should have infinite loops, one shouldn't. - Rich Bradshaw
Rich is right. Fundamentally, from a coding standpoint, a game loop creates a "frame" of action in which actions are taken based on the state of the game such as controller input, object collisions, etc. This loop repeats infinitely until some state of some game element or input tells it to stop or "quit." This approach keeps the CPU and graphics card pretty busy, hence the market for gamer machines with fast processors and even faster graphics cards.
Business applications do not have an active loop. Instead, they sit idle waiting for an event such as a click, a message from a web service client, an HTTP GET request, etc. Then they respond to the event.
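To make the contrast concrete, here is a minimal sketch of that pattern in C++ (illustrative only; the function names are invented, not from any real engine):

    #include <chrono>
    #include <cstdio>

    // Stub game systems; in a real engine each of these is a major subsystem.
    static int framesLeft = 3; // stand-in for "the player quits"
    bool quitRequested() { return framesLeft <= 0; }
    void readInput() { /* poll pads/keys here instead of waiting for events */ }
    void updateWorld(double dt) { std::printf("update dt=%.4fs\n", dt); }
    void renderFrame() { --framesLeft; /* redraw the whole scene here */ }

    int main() {
        using clock = std::chrono::steady_clock;
        auto previous = clock::now();

        // The "infinite" loop: run a frame, every frame, until told to stop.
        while (!quitRequested()) {
            auto now = clock::now();
            double dt = std::chrono::duration<double>(now - previous).count();
            previous = now;

            readInput();     // poll, don't block - unlike an event-driven app
            updateWorld(dt); // advance the simulation by the elapsed time
            renderFrame();   // redraw everything, typically 30-60 times/second
        }
    }

A business application inverts this: the framework owns the loop (an event dispatcher that sleeps between clicks or requests), and your code only runs in short bursts in response.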
Sure, gaming is generally more geometrically intensive than business applications, but that is not entirely true. Consider image editing, CAD and graphics tools. For many, these are business applications. But for the most part, a business application has to do with querying data, displaying that data, accepting user input, and modifying the data based on user input. In many cases, the business application does this across the network or even the Internet, but that's it in a nutshell.
The skillset and mindset of a business application developer and the game developer are often different. The game developer has a limited number of input constructs to consider in terms of creating a user experience, with an unlimited choice of context or "world" if you will. The business developer is the opposite, with a limited set of potential contexts, usually the web page or the basic window, and an unlimited (or nearly so) set of input and data display combinations to create a user experience entirely different from the one the game developer sets out to achieve.

One big difference between business development and game development is the number of disciplines involved. Most business software is created by a team of developers, who all have the same basic skillset. In contrast, a game is created by a team of game designers, visual artists, 3d modelers, animators, musicians, and developers.

Good points about mathematics and integration of artists and other specialists in the team. In addition, I'd say that:
Game development, to some extent, will be more hardware dependent. In many cases, games are built simultaneously for several platforms and consoles (not to mention cellphones) with different architectures. That is abstracted away up to a certain point, but developers cannot completely avoid this fact.
Game development is often more performance sensitive, or at least the performance requirements are different. You're dealing with real-time experience, so a lot of time is spent optimizing those pesky fps.
In many cases, game development does not care as much about reuse and maintainability. The game engine will probably be reused, but the application code base will probably not live to see v2.0. In the last stretch of a project, there is a lot of quick and dirty debugging going on. If it looks fine to the end user, there's no added value in making an elegant fix two days before the release.

Let's start from the goal - the goal of game development is to create an entertaining product. It should be accurate to the extent that it looks good and runs smoothly. The goal of a business software solution is to model a work process. It should be a tool which works fast enough: a stable product which executes the tasks it should do with absolute accuracy and security.
Since we target different goals, we use different approaches to build a game and a business software product. Let's move to the requirements. For a game, the requirements are determined by the game designer. For a software product the business defines the process and the requirements. For a game the requirements are not final - whether we have small cartoon figures or realistic human models does not matter to the game engine, for example. But for a business product, the requirements should be strictly followed and clarified in the maximum possible detail before development.
From the different requirements come different software design and development approaches. For a game, the performance and gameplay are critical, and the quality of the graphics and sounds (for example) can be reduced just to stay compatible with weaker hardware. Also, the physics model can be simplified just to run more smoothly and improve the gameplay. For business software everything should be exact, and cutting features means that your product will no longer be as functional as designed.
For a game, security is not that important - there is no critical customer data which needs to be protected. For business software a good security system should be supplied - starting from data encryption (when saving to data storage or transferring through the network), moving through backup systems and, last but not least, compatibility with previous versions.
I could continue with other aspects but I guess this is already too much for one post...

Business software (that isn't shrink-wrap software) can generally be much more poorly written but still considered a commercial success due to the bizarre disconnect between the quality of the product and saleability of the product. Game software, on the other hand, has to actually be good to survive the marketplace.
The bar for quality in specialized business software is generally much lower.

Business software has to be reliable, maintainable, consistent, not be too annoyingly slow, and can build on lots of already written stuff, such as databases, controls, forms etc.
A games programmer often starts with a blank sheet - hardware reference manuals, some documentation, and usually thin vendor libraries around some advanced hardware that's completely different from the last job.
From this they have to build what you see - and make most of it run within a 20ms frame, reliably, often on a ridiculously short schedule, facing changing requirements and a very hard deadline, working untold numbers of hours for a comparative pittance.
That's not to mention often having to master some fairly complex mathematics and physics....

Performance is really the difference, from what I can tell.
Technology-wise, games are usually Windows/C++ driven.

Game programming has more in common with scientific programming. You are modeling behavioral systems and anticipating results based upon a limited set of input.


Transition from business to game programming [closed]

Does anyone have any idea how it would be possible to transition from business to game programming? How would anyone get a start in game programming? It seems much more exciting and rewarding (better paying too?). But it seems like most of the jobs out of school are for business programming. Any advice or insight on how to do it or if it can be done?
This is not a direct answer to your question, "How can I transition?" Instead, I'd like to recommend you not go down that road, or at least be realistic about what it is like. To back that up I'll quote some stats from the 2004 IGDA survey on quality of life for game developers:
34.3% of developers expect to leave the industry within 5 years, and 51.2% within 10 years.
Only 3.4% said that their coworkers averaged 10 or more years of experience.
Crunch time is omnipresent, during which respondents work 65 to 80 hours a week (35.2%). The average crunch work week exceeds 80 hours (13%). Overtime is often uncompensated (46.8%).
44% of developers claim they could use more people or special skills on their projects.
Spouses are likely to respond that "You work too much..." (61.5%); "You are always stressed out." (43.5%); "You don't make enough money." (35.6%).
Contrary to expectations, more people said that games were only one of many career options for them (34%) than said games were their only choice (32%).
Many years ago I created a couple of game development websites on my own and then was one of the founders of GameDev.net. One of the reasons I did it was to make contacts and get into game development professionally. It definitely worked, a couple of the people who were co-founders have gone on to work in the industry and I'm sure I could have gone that way too but everything I learned about the industry taught me the following things:
There is an endless supply of developers out there who believe game development would be really cool. The people hiring for the industry know this and aren't going to pay you nearly as well because they rely on this basically inexhaustible pool of people.
Many of the developers within the industry may be good with 3D or sound or many other topics but often they are inexperienced with basic software practices that you or I might consider essential. In that category I would put things like source control, test first design, design patterns, etc. Even when they know better the time crunch to get stuff out the door often makes them toss good software practices in a foolish attempt to save time.
Working on a game for two years can quickly become no different than working on any other program for two years. That is, when you have to dig around in the guts of the program day after day and deal with its bugs and only with that one game it's not going to seem all that fun anymore. In fact, you may find yourself playing other games just to get away from it for a while.
Who says you are going to be working on Half-Life X or one of the few dozen cool games that come out every year anyway? Remember, somebody is out there building the game that goes with the next Will Ferrell movie and it's probably going to be you. Look around at most of the dreck that comes out. That stuff doesn't develop itself.
It's quite a transition! The biggest difference in mindset is moving away from the business world's reliance on abstraction to one where you're focused much more on the particular hardware you're targeting and getting things to run within strict budgets of time and memory. Game programming is a lot more like embedded programming than it is like web programming -- you have to think about the exact memory footprint of everything and CPU time is at a huge premium.
This is even true of being an MMO server programmer, because a) the server has to answer to each client command within 80 milliseconds to feel responsive, and b) the bigger the footprint of the game on the server, the more hardware you have to buy, and that costs real money.
First up you should learn C++ if you haven't already. Almost every game is written in C or C++ these days. Game consoles have strictly limited memory, and if you allocate one byte past 512MB it doesn't slow down, it crashes; at the same time, each trip through the main app loop has to complete in 33ms or less, so we can't rely on garbage collection and smart pointers. The game-development world is one of those special applications where you really need to do all those optimizations that people here say you never need to do any more.
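As a hedged illustration of what that discipline looks like (my sketch, not production code; real engines also handle alignment, which is omitted here): grab the memory once up front and never touch the general-purpose allocator mid-frame:

    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // A toy frame arena: one fixed block, handed out linearly, reset wholesale
    // at the start of each frame. Nothing mid-frame can fragment, leak, or
    // trigger a collector - and blowing the budget fails loudly in testing.
    class FrameArena {
    public:
        explicit FrameArena(std::size_t bytes) : buffer_(bytes), used_(0) {}

        void* alloc(std::size_t bytes) {
            assert(used_ + bytes <= buffer_.size() && "blew the frame budget");
            void* p = buffer_.data() + used_;
            used_ += bytes;
            return p;
        }

        void reset() { used_ = 0; } // called once at the top of each frame

    private:
        std::vector<std::uint8_t> buffer_;
        std::size_t used_;
    };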
Also, bone up on your math. Games are built out of linear algebra and kinematics -- not just the graphics, but the state of the world and the behavior of every character and object. I like Eric Lengyel's book for game math; it's been my bible for years (though there are many other good ones).
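To give a flavour of the kind of math involved (a hand-rolled toy; any real codebase would use its own maths library): even a simple gameplay question like "is the enemy in front of me?" comes down to a dot product of unit vectors:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3  normalized(Vec3 v) {
        float len = std::sqrt(dot(v, v));
        return {v.x / len, v.y / len, v.z / len};
    }

    // The dot product of two unit vectors is the cosine of the angle between
    // them, so a positive value means the target lies within 90 degrees of
    // the direction we are facing.
    bool inFront(Vec3 position, Vec3 facing, Vec3 target) {
        return dot(normalized(facing), normalized(target - position)) > 0.0f;
    }

    int main() {
        Vec3 player{0, 0, 0}, facing{0, 0, 1}, enemy{2, 0, 5};
        std::printf("enemy in front: %s\n",
                    inFront(player, facing, enemy) ? "yes" : "no");
    }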
You could try to learn some graphics math and programming as well, but this is sort of a subspecialty now and typically only a couple of people on a game team are really working at the level of DirectX calls. I suspect you're more interested in game logic and server backend, which is less specialized.
Make games. There are many frameworks to help you get started. XNA has the advantage of being a real-world product that actual games are shipped with, unlike PyGame or SDL which are quick to get up and running but have vanishingly little commercial support.
A common route people take transitioning into the game industry is starting as a game development team's Tools Developer.
These tools are often written in higher-level languages like C# and are used to aid in the development of the games. For example, you might help maintain or modify the team's in-house map/level editor, or help develop the tools that convert one media file format into their proprietary file format.
After you have spent some time as a tools developer you can start picking up domain knowledge on the side and naturally transition into one of the many types of game programmers:
Game physics programmer
Artificial intelligence programmer
Graphics programmer
Sound programmer
Gameplay programmer
Scripter
UI programmer
Input programmer
Network programmer
Game tools programmer
Porting programmer
Technology programmer
Lead game programmer
Modern games are often made by huge teams whose roles are highly subdivided. So, it might be worth it to pick one of the above specialities and begin studying right away as you attempt to get your foot in the door. It almost goes without saying that you should be proficient in C++ and one of the major graphics libraries (OpenGL, DirectX, etc.). Don't stress about learning both; just pick one and learn it. Generally people who know one very well can transition to the other if they need to.
If you're a good programmer you can do it; game programming requires a much better understanding of the platforms and tools you use to develop the games.
I think it's much harder to get into the game programming industry than just the regular industry.
The best alternative is to create your own games, get exposure, if you're good enough you'll find your way.
There are many really good competitors though, just check out the many sites that offer free flash games, you can start posting your work to those sites.
The general story is that the way to get into game programming is to start doing it. There's no point in even showing up on a game company's doorstep without a demo of whatever you want them to pay you to do.
If you're coming out of business programming, that'll count against you and be a culture shock besides. The game industry runs on single 20-something males who can be taught that 120-hour work weeks are just how things are done.
I'd recommend keeping your current job for a while and finding a mod team or open source game that could use help. Try to find one that looks likely to create a playable game rather than vanish; at a guess, it will look better on your resume than some test demos, both because you'll probably be working on something more advanced than you would at home and because it shows you can work in a games team.
I was wondering the same thing!
I've noticed there are a number of good game-programming books at Barnes & Nobles, probably not a bad place to start.
In general though, I'd start looking at books, maybe coding some prototypes, and start looking at what companies are in your area, what tools they use, how you could basically make yourself valuable to them!
By the way, does anyone happen to know if there are particular engines similar to what WOW / Guild Wars are using for MMORPG game development in particular (my main interest)?
I think it is not an easy switch. Game programming is not something you can learn overnight. I suspect the entry bar will be quite high if you want more than a graduate salary.
A long time ago I worked in a company that worked in the online gambling field. I then decided I wanted to be engaged in more serious activities.
You really need to answer one question - is this what you want because you like it, or just because it seems to be rewarding right now? In the latter case you'll need to understand that you'll be playing catch-up, and no one can promise the salaries will be as high as you wish by the time you get your skills up.
Maybe consider some certifications/training so that you can step-up your current career position in business programming?
If however it is what you really want, then just follow your heart. Show your interest and commitment; potential employers will notice it and hopefully prefer you over a guy who has applied just because the salary was looking attractive.
On the other hand, business developers/consultants (for example in the SAP world) earn quite a generous ransom.

How is web programming different from back-end programming?

I have worked on single threaded business logic/back-end programming for most of my career. I now wish to learn web programming but would like to know how web programming is different from non-GUI programming (e.g. writing an API or a file processing application). I am not talking about the GUI design aspects (someone has already asked that question here) but more about programming complexity.
On the few occasions when I have worked on a web application, I felt that web applications are relatively more non-deterministic and unpredictable (for example, due to the event-driven, multi-threaded model of web applications, there are several permutations and combinations of events and actions one needs to take care of).
What would you say are some of the basic features of web programming that makes it different from non-GUI applications? What are the pitfalls/mistakes a back-end developer might commit while working on web applications?
EDIT
My definition of back-end programming means non-GUI applications like an API or a file-processing batch application that parses a large data file, reads the records, does a lot of number-crunching calculations on the data and spews out the results into another file or database. Another example could be a library of date and time utilities.
The biggest challenge with web programming is dealing with state. HTTP is a stateless protocol. This can make maintaining state more challenging than in a desktop application. Web applications tend to have a different life cycle due to this. Each web development platform deals with this somewhat differently, but they all need to deal with it in some way.
Web applications generally feel like single threaded applications, as you - the application developer - rarely create threads of your own. If anything, it's actually a lot easier, because the stateless nature of the web transactions means that you have to load the data for the page each time from the database. Therefore, you don't have to worry about concurrency, since 'whatever is there' is usually good enough.
The biggest problem with Web development is all of the background knowledge that you have to accumulate over time. How do you lay out web pages? How do you style things with CSS? How do you get parameters from the query string? How do you validate a field value in JavaScript? All of those things are actually really easy to learn, but there's just so many of them that it can be a real pain.
The biggest pitfall I've witnessed application developers make when moving to the web is not considering the costs of their code. Either they abuse MySQL too much, to the point of bogging the RDBMS down; they write code that uses too much memory; or they make front-end pages that are too big to fit in the dialup/cellphone or low-end broadband/DSL pipeline.
Sometimes writing a heavy-duty page can't be avoided, but considerations can be made to cache as much as possible; too often, though, a page that will be hit a lot goes out the door without any effort to profile and optimize its queries.
It's not that these people are stupid, just a lack of experience and awareness that they need to play nice and write code that's somewhat lean.
Back-end programming is infinitely easier than web programming. (You have been warned!) Web programming is the easiest to show off to everybody.
Most web sites have a back end component as well. Typical structure will be something like:
UI - html/css/javascript
Controller - if using MVC
Business Logic/Services - this is backend
Database - this is also backend
So building web sites will still mean a lot of back end work. In regards to the UI, the main difference is that you will need to have a good eye for design and layout to do it well. The html/css technology is pretty simple in itself.
HTML was actually developed to deliver physics papers. You can still see it in some of the old meta tags. At any rate the difference is web programming is stateless and thick client development is not.
As you have adeptly indicated, it's all driven by events. True, JavaScript has mucked up web development a bit by creating the illusion of a stateful environment, but in the end everything comes down to simple HTML.
It's never too late to start learning. I would say start making some static HTML pages and move your way up to an MVC framework; I suggest the Microsoft MVC Framework. It's pretty fantastic. There are others you could use, like ASP.Net WebForms, but you won't learn anything by dragging and dropping things onto a designer ;).
Web and GUI applications interface with humans; back-end applications interface with services and databases. As such, your specifications need to include significant consideration of your user's mental model - making things behave as people expect them to. And doing that - understanding how users think - is not always easy or logical. You may have elegant algorithmic solutions that simply fail to engage, because people don't always think logically. Many times, quite elegant UIs are extremely twisted coding-wise, which is very contrary to system-to-system programming.
Depending on problem-space, much of this can be more art than science.
One consideration (amongst many) with web programming is that users won't just be stupid (not that they all are, but you always have to factor that in), they will sometimes (assume always) be downright malicious and nasty, and will do everything in their power to destroy your application, your database, your weekends, your sanity...
Be as paranoid as a very small nun at a penguin shoot. Do not trust your users.
Another consideration is that Back End programming as per your definition is easier to test.
Once you begin web programming you're at the mercy of the various browsers' different interpretations of the same code. Plus the user, with inputs of mouse and keyboard, has a variety of ways to break what you produce.
Web programming isn't back-end programing. It shows stuff on the front end, the web.
Are you defining it otherwise?
EDIT
Web programming pulls you into presenting data consistently, visually, to everyone. Back end coding means constructing that data, in the same way for presentation, but not presenting it.
Based on your definition of "back end programming," your question applies not only to web applications, but to any GUI application.
It kind of depends what kind of GUI application we're talking about. For example:
Internal business applications tend to involve lots of business process workflow logic, record keeping, and interoperability between separate systems. No fancy algorithms or number crunching are needed. Your audience is limited, so performance is not a big deal, but cross-platform compatibility is important, so these tend to be web applications. Your main concerns are making it easy to tie business systems together, and keeping the API layered to ensure that the GUI code does not have to deal with any of the business logic code.
Public web sites (such as this one) tend to involve less of a formal architecture, and more of a mentality of "just get this cool feature to work so we can get more visitors." Again, no number crunching or algorithms unless performance is an issue. Performance is more of an issue for massively popular sites like Slashdot or Google, so if you anticipate rapid growth it pays to design for scalability in advance.
Public e-commerce web sites are kind of like both of the above: features and performance are important, but equally important is the structured architecture underneath it that ties all of the commerce business systems together (purchasing, supplier, shopping cart, payment gateways, etc.)
For the actual GUI portion, the complexity of the application kind of determines how complex the GUI code will be. For highly complex, nested GUIs where your requirements change often, it's easy to fall into the trap of putting too much GUI stuff into one page. Soon the page exceeds most people's complexity threshold, making the page very difficult to maintain. It pays to think in advance how you can separate different portions of the GUI into separate components, and then tie them together. If you're new to GUI programming, read some articles on the Model-View-Controller (MVC) pattern.
For simple web sites, where most pages are fairly static, this issue doesn't come up so much because each individual page is easy to maintain.
Most web programming is done in the style popular in the early seventies, before Dijkstra's 'goto considered harmful' was well-known.

Development Cost of Procedural Programming vs. OOP?

I come from a fairly strong OO background, and the benefits of OOD & OOP are second nature to me, but recently I've found myself in a development shop tied to procedural programming habits. The implementation language has some OOP features, but they are not used in optimal ways.
Update: everyone seems to have an opinion about this topic, as do I, but the question was:
Have there been any good comparative studies contrasting the cost of software development using procedural programming languages versus Object Oriented languages?
Some commenters have pointed out the dubious nature of trying to compare apples to oranges, and I agree that it would be very difficult to measure accurately, though perhaps not entirely impossible.
Almost all of these questions are confounded by the problem that individual programmer productivity varies by an order of magnitude or more; if you happen to have an OO programmer who works at productivity x, and a "procedural" programmer who is a 10x programmer, the procedural programmer is liable to win even if OO is faster in some sense.
There's also the problem that coding productivity is usually only 10-20 percent of the total effort in a realistic project, so higher coding productivity doesn't have much impact; even that hypothetical 10x programmer, or an infinitely fast programmer, can't cut the overall effort by more than 10-20 percent.
You might have a look at Fred Brooks' paper "No Silver Bullet".
After poking around with Google I found this paper here. The search terms I used were "productivity object oriented".
The opening paragraph goes on to say:
"Introduction of object-oriented technology does not appear to hinder overall productivity on new large commercial projects, but it neither seems to improve it in the first two product generations. In practice, the governing influence may be the business workflow and not the methodology."
I think you will find that object-oriented programming is better in specific circumstances but neutral for everything else. What sold my bosses on converting my company's CAD/CAM application to an object-oriented framework is that I showed precisely the areas in which it would help. The focus wasn't on the methodology as a whole but on how it would help us solve the specific problems we had. For us that was having an extensible framework for adding more shapes, reports, and machine controllers, and using collections to remove the memory limitation of the older design.
OO and procedural offer two different ways to develop, and both can be costly if badly managed.
If we suppose that the work is done by the best person in both cases, I think the result might be equal in terms of cost.
I believe the cost difference will show up in the maintenance phase, where you will need to add features and modify current ones. Procedural projects are harder to test automatically, are less able to expand without affecting other parts, and are harder to understand part by part (because cohesive parts aren't necessarily grouped together).
So, I think, the OO cost will be lower in the long run compared to Procedural.
I think S.Lott was referring to the "unrepeatable experiment" phenomenon, i.e. you cannot write application X procedurally, then rewind time and write it OO to see what the difference is.
You could write the same app twice in two different ways, but:
you would learn something about the app doing it the first way that would help you in the second way, and
you may be better at OO than at procedural, or vice versa, depending on your experience and the nature of the application and the tools chosen,
so there really is no direct basis for comparison.
Empirical studies are likewise useless, for similar reasons - different applications, different teams, etc.
Paradigm shifts are difficult, and a small percentage of programmers may never make the transition.
If you are free to develop your way, then the solution is simple: develop things your way, and when your co-workers notice that you are coding circles around them and your code doesn't break nearly as often, and they ask you how you do it, then teach them OOP (along with TDD and any other good practices you may use).
If not, well, it might be time to polish the resume... ;-)
Good idea. A head-to-head comparison. Write application X in a procedural style, and in an OO style and measure something. Cost to develop. Return on Investment.
What does it mean to write the same application in two styles? It would be a different application, wouldn't it? The procedural people would balk that the OO folks were cheating when they used inheritance or messaging or encapsulation.
There can't be such a comparison. There's no basis for comparing two "versions" of an application. It's like asking if apples or oranges are more cost-effective at being fruit.
Having said that, you have to focus on things other folks can actually see.
Time to build something that works.
Rate of bugs and problems.
If your approach is better, you'll be successful, and people will want to know why.
When you explain that OO leads to your success... well... you've won the argument.
The key is time. How long does it take the company to change the design to add new features or fix existing ones. Any study you make should focus on that area.
My company had an event-driven, procedure-oriented design for CAM software in the mid-90s, created using VB3. It was taking a long time to adapt the software to new machines, and a long time to test the effects of bug fixes and new features.
When VB6 came along, I was able to graph out the current design and a new design that fixed the testing and adaptation problems. The non-technical boss grasped what I was trying to do right away.
The key is to explain WHY OOP will benefit the project. Use things like Refactoring by Fowler and Design Patterns to show how a new design will lower the time to do things. Also include how you get from Point A to Point B. Refactoring will help with showing how you can have working intermediate stages that can be shipped.
I don't think you'll find a study like that. At the least, you should define what you mean by "cost", because OOP design is somewhat slower, so in the short term development may be faster with procedural programming. In the very short term, maybe spaghetti coding is even faster.
But when the project begins growing, things are the opposite, because OOP design is better suited to managing code complexity.
So in a small project procedural design MAY be cheaper, because it's faster and you don't hit the drawbacks.
But in a big project you'll get stuck very quickly using only a simple paradigm like procedural programming.
I doubt you will find a definitive study. As several people have mentioned this is not a reproducible experiment. You will find anecdotal evidence, a lot of it. Some people may find some statistical studies, but I would examine them carefully. I am not aware of any really good ones.
I will also make another point: there is no such thing as purely object-oriented or purely procedural in the real world. Many if not most object methods are written with procedural code. At the same time, many procedural programs use OO methodologies such as encapsulation (also called abstraction by some).
Don't get me wrong, OO and procedural programs look and are different, but it is a matter of dark gray vs light gray instead of black and white.
This article says nothing about OOP vs Procedural. But I'd think that you could use similar metrics from your company for a discussion.
I find it interesting as my company is starting to explore the ROWE initiative. In our first session, it was apparent that we don't currently capture enough metrics on outcomes.
So you need to focus on 1) Is the maintenance of current processes impeding future development? 2) How are different methods going to affect #1?

The value of hobby game development

Does attempting to develop some sort of game, even just as a hobby during leisure time provide useful (professional) experience or is it a childish waste of time?
I have pursued small personal game projects on and off throughout my programming career. I've found the (often) strict performance requirements and escalating design complexity have taught me some of my most useful programming lessons.
In these projects, to name just a few lessons, I very quickly came face to face with "Everything is fast for small N". I also discovered the hard way the value of basic object-oriented design principles for managing complexity.
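For instance (a toy reconstruction of the kind of code I mean, not my original): the naive all-pairs collision test below works beautifully with fifty sprites and collapses with ten thousand, which is how you end up learning about spatial partitioning:

    #include <cstddef>
    #include <vector>

    struct Circle { float x, y, r; };

    bool overlaps(const Circle& a, const Circle& b) {
        float dx = a.x - b.x, dy = a.y - b.y, rr = a.r + b.r;
        return dx * dx + dy * dy < rr * rr;
    }

    // The obvious approach: test every pair. The work grows as N*(N-1)/2,
    // so 50 objects cost ~1,225 tests but 10,000 cost ~50 million per frame.
    int countCollisions(const std::vector<Circle>& objects) {
        int hits = 0;
        for (std::size_t i = 0; i < objects.size(); ++i)
            for (std::size_t j = i + 1; j < objects.size(); ++j)
                if (overlaps(objects[i], objects[j])) ++hits;
        return hits;
    }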
In a field where many technologies and topics can be quite dry/dull, I think hobby game development is important in motivating new (and not so new) developers to brush up on essential skills while having fun at the same time.
This question talks about hobby projects in general; however, here I am more interested in game projects specifically and how valuable they are to professional programmers.
You can learn a lot from game development. Game development requires a discipline that you can't find in other programming projects.
Here are just a small set of things game development has taught me:
Optimization for speed
Sacrificing computational depth for speed
Developing under small constraints of memory
Building a system that works like an operating system but is geared toward speed.
Keeping hundreds to thousands of objects in a tree, each with their own unique characteristics
Some areas of game development have great academic value (like Artificial Intelligence, Procedural Algorithms, etc)
It doesn't matter how much of a hack the code is, as long as the gameplay is there. Translating this to other disciplines, the objective of programming is to make the customers happy, regardless of how clever or ugly your code is.
Because game programmers are forced to use less resources, they become better programmers.
I think extra-curricular development is a good thing whether it involves games or not. For a start you can try different technologies and development platforms, and it is a great way of keeping your skills up to date. I prefer mathematical algorithms to games, but that's just a preference which doesn't speak to the value of doing it at all.
If you bring any of this back to your employer then they are getting the benefit of your broadening knowledge which you are gaining on your own time. That is good for them too.
I say code away to your heart's content!
Games have some of the most complicated processing that's "agreeable" to a layman, hobbyist programmer. In high school, all I wrote were games. Want to explore physics? Write a game. 3D graphics? Write a game. High performance computing? Write a game. AI? Economics? Military Strategy? Natural Language Processing? Theorem Proving? Write a game.
You don't have to publish it, you don't have to document it, you don't even have to PLAY it, you just need to fiddle with it, and you'll learn any algorithm you find interesting as you try to apply it -- in a game.
Games are interesting because of the wide domain they can cover. Everything else is just data processing, and you can do that at work!
Game development (or any other sort of personal programming) is a good way to:
learn new languages
learn new concepts (TDD, OO, etc.)
use and evaluate different tools/technologies (CI, automated tests, etc.)
These sorts of projects give you the freedom to explore different aspects of the programming world that you are not able to at work. If you are stuck doing line-of-business applications at work, you probably won't deal with a physics engine or spatial rendering. But you could explore these subjects in your game.
This would also provide you with a good portfolio of code you can bring if/when you interview for new positions. Assuming the code you write is in decent shape...
If you are looking to get a job in game development, you should absolutely be doing some hobby development on the side while you look. Being able to send a more-or-less complete game along with your resume makes it stand out from the crowd. When we list a game programming job we get a ton of resumes, and while I'm thrilled to hire people with no industry experience to fill them, it's kind of hard to pick between all the options.
Sitting down at a problem and solving it with the tools at hand (whatever the problem is: fixing a database, programming an interface or making chess in ASCII) helps you become a better programmer.
Hands down.
I think more importantly, is the hobby game development making you happy?
Many areas of game development can absolutely be applicable on a more professional level, but if that's the only reason you're developing games as a hobby then you may want to re-evaluate the situation and perhaps put your efforts toward various open source projects that could proudly be displayed on a resume (not that hobby game development couldn't be) or discussed at an interview.
Your hobbies should be something you love doing and if you love game development, then absolutely stick with it and hey, maybe you'll find yourself doing it professionally which ultimately seems like the ideal situation for your scenario.
I don't see how anything that enables you learn, practise and experiment could be considered 'childish'.
Besides, if you are aiming to produce a decent (even 'professional') game, it will almost certainly require learning and mastering skills that are directly transferable to many 'conventional' roles. Optimisation, testing, cross-platform working, UI design and usability... the list goes on.
YES
I taught myself C (and Psion's proprietary OO extensions) using the Psion Series 3 TopSpeed C SDK and wrote several games which I released under the GNU license. (Previously I had quite a bit of experience in Turbo Pascal and Pascal on the Amiga, and Borland C++ 3.1 on Windows 3.1 in a financial analysis internship doing signal processing, but when I got the Psion I had to go back to C, and I used K&R to get a good solid basis for that code.)
Then I parlayed the expertise I learned about the Psion platform into a 3 year gig doing mobile development using their industrial handhelds where I gained experience in databases, too. It was a huge turnaround for them, the product was in disarray and I had a ton more experience in the platform than anyone else - just from that year writing games.
I parlayed that into a Windows development gig where I eventually became IT director and got massive experience with SQL Server, Windows, ASP, data centers, DR, you name it.
Then I moved into data warehouse consultancy.
I owe a lot of it back to that first foot in the door where I really was able to make a massive difference to that first company because of the platform experience in C and that particular C-based library system.

Ratio of real code to supporting code

I'm finding only about 30% of my code actually solves problems, the rest is taken up by logging, tests, parameter checking, exceptions, error handling and so on. Do you find that in your code, and is there an IDE/Editor that allows you to hide code that's not interesting?
OTOH are there languages which make the support code more manageable and smaller in size?
Edit - I think we're all aware of the difference between business logic and other code. I'm not saying that the logging etc. is not important. The thing is, when I'm coding I'm either implementing business logic, or I'm making sure things don't break. For me those are two different ways of thinking. Do others develop like that, and is there an IDE that supports that way of developing?
Supporting code is just as important as the "real code". The quality of your product is determined as much by supporting code as anything else.
Consider an automobile. In terms of just getting from point A to point B, that requires nothing more than a go-cart: a frame, a seat, an engine, a few tires. But modern cars have a lot more than just the basics. Highly efficient engines using electronic engine timing. Automatic transmissions. Bucket seats. Heating and A/C. Rack and pinion steering. Power brakes. Anti-lock brakes. Quiet, comfortable cabins protected from the weather. Air bags. Crumple zones and other advanced safety features. Etc. Etc.
Details and execution are important, even in software. If you find that your "supporting code" tends to look more like kludges and hacks, then it's time to rethink your fundamental approach. But ultimately the fit and finish determines quality of the end product as much as anything else.
Edit: The questions you should ask yourself:
Is your "supporting code":
An umbrella duct taped to a pole or a metal and glass cabin frame?
A piece of pipe tied to the front of the car or an energy absorbing bumper integrated into a crumple zone?
A grappling hook on a rope tied to the frame or 4-wheel anti-lock power brakes?
A pair of goggles and a thick coat or a windshield and a heating system?
Answers to these questions will probably affect how much you care about your "supporting code".
Edit: Response to Dave Turvey's comment:
I'd encourage rereading the original question, one of the examples of "support code" listed is "error handling". Consider this for a moment. Imagine it in the context of, say, an automobile, a microwave oven, or even an operating system. Should error handling be relegated to second class citizenship because it serves a "support" function in some abstract sense? In an automobile the safety features are part of the fundamental design of the vehicle and comprise a substantial part of the value of the car. The safety features and "error handling" of a microwave oven (indeed, of the microwave oven's embedded software as well) are an important part of its value as well. A microwave oven that was improperly shielded could cook food just fine, under the right circumstances, but it would pose a hazard to the operator.
The implicit featureset of every tool (software or otherwise) includes this list:
Robustness
Usability
Performance
Everything anyone has ever built or used has had these features. Failure to understand this will translate to failure to execute well on these features which will make for a poor quality product of low value and low commercial interest. There is no such thing as "support code", there is only a misunderstanding of the nature of what it means for a feature to be complete. A "feature" that works in the abstract only under laboratory conditions is an experiment, not a part of a product.
The idea of pure, pristine features floating on a bog of dirty, ugly support code is the wrong image of software development. Instead, think of elegant, superbly-integrated machinery that is well-built, intuitive to use, and powerful.
The supporting code is important, but you don't want to be distracted by it when you'd rather focus elsewhere. There are two technologies that can help.
A language with first-class functions will help you modularize your code so that logging, timing, and so on can be implemented once and then combined with many other modules. It will also help you write unit tests. Some good ways to learn the techniques are to read the paper Why Functional Programming Matters and to learn about the QuickCheck tool. (No, I am not a shill for John Hughes, but he does do wonderful work.)
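To sketch the idea in C++ (the threads' common language) rather than a functional one - purely illustrative, with invented names - a higher-order wrapper lets you write the timing concern once and apply it to any callable:

    #include <chrono>
    #include <cstdio>
    #include <utility>

    // Wraps any non-void callable with timing, so the instrumentation lives
    // in one place instead of being copy-pasted into every function.
    // (A void-returning overload is omitted for brevity.)
    template <typename F, typename... Args>
    auto timed(const char* label, F&& f, Args&&... args) {
        auto start = std::chrono::steady_clock::now();
        auto result = std::forward<F>(f)(std::forward<Args>(args)...);
        auto ms = std::chrono::duration<double, std::milli>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("%s took %.3f ms\n", label, ms);
        return result;
    }

    int parseRecords(int n) { return n * 2; } // stand-in for real logic

    int main() {
        int out = timed("parseRecords", parseRecords, 1000);
        std::printf("result: %d\n", out);
    }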
If you cannot use a programming language with powerful capabilities for modularization and reuse, or if you don't want to, Don Knuth's Literate Programming technique will help you organize your code so that you can split up parts the way you want and pay attention only to what you want, when you want. The Noweb literate-programming tool supports any language that can be written in ASCII, and also combinations of those languages.
If my IDE could hide "not interesting code" I would definitely turn the feature off. You wouldn't want that happening, I bet :)
There are certainly languages that minimise the amount of supporting code, but I don't think you could switch from Java to, let's say, JavaScript simply because in JavaScript you wouldn't have to declare every exception... I think it's quite necessary to have your supporting code where it is.
Oh, and you could have your program formally specified and mathematically proven, then you wouldn't need to support your code too much ;D
The real code you are referring to is usually called "Business Logic".
In a good unit testing system, your unit tests should be in their own classes (and probably their own assemblies) so that shouldn't be an issue.
The rest is language-based for the most part. The more advanced a language, the better its ability to avoid writing support code to some degree. Also, a well-targeted development system can help you avoid writing a lot of code (Visual Basic's screen builder, Ruby on Rails, ...) but these abstractions can break down and cause you to write just as much code as anything else if you use them to develop targets outside their intended types of apps. (VB & Ruby don't help all that much if you're calculating prime numbers.)
Beyond the language/platform, you have refactoring--the art of eliminating all the supporting code that you can (as well as redundancies in your business logic) to keep your code-base clean and small.
When practicing advanced refactoring, you'll probably end up writing tools for yourself.
Sometimes abstracting data out of your code and into a structured file of some sort can eliminate huge piles of support code and move the rest into "Business logic" because now parsing that data and setting it up is part of the "business" your program does.
This is a good trade-off because this type of business logic tends to be more readable and easier to factor. The other advantage of this kind of abstraction is that all your "configuration" is now done in data, which tends to make it somebody else's problem.
As an example of this type of tooling: Rails itself! It takes a lot of the boilerplate of web development and factors it out of the code and into libraries driven by data and simplistic code (Ruby blurs the line between code and data--their data files actually loop back to being specified in Ruby code!)
It's like you want to take a trip to the top of Pike's Peak. You can take the Winnebago, you can take your SUV, or a motorcycle, or ride up on your bike.
Some ways are more or less expensive, faster, etc. Sometimes you end up taking along a lot of stuff that isn't there strictly for accomplishing vertical progress; it's nice to have a beer in the cooler. But it pays to remember that you're responsible for everything that goes with you to the top.
Aspect Oriented Programming partly addresses this. It allows you to inject code into existing source/bytecode. This way you can make a task such as logging live in its own module instead of being woven into the business logic.
Work expands to fill its container. This really sounds like an economics question: optimizing your outputs (features for users and features for the developer) against expensive inputs (time spent writing features, time spent writing plumbing code).
You have to include user-visible features or you don't have a viable product or job. Once that is partly done, your remaining budget of time will be split between activities with a visible return on effort and an invisible (but positive!) return on effort, like exception logging, memory management, etc.
Whatever language makes it cheaper to implement features will probably increase your feature-to-plumbing-code ratio. Likewise, whatever language makes it cheaper to implement plumbing code will probably also increase that ratio, because you'll have freed up more time to write features.
Like all optimization problems you'd have 2 effects-- the increase in the size of the support code (because say, you're using cheap code generation) and the increase in the size of feature related code (because you have more time left over to write features), so the final ratio might be hard to predict.
I do not begrudge the 90% of my code that is data-access plumbing, because it is all testable, code-generated and very cheap, compared to the 10% of handwritten, domain-specific code.
I don't try to make all routines foolproof, only those exposed to the outside world.
http://en.wikipedia.org/wiki/Folding_editor
Higher and more dynamic languages are usually less verbose. Weak typing also saves a lot of code. Of course there are trade-offs.
I use the #region directive in Visual Studio to collapse blocks of code that are not the primary focus, e.g. unit tests. With log4net logging statements are only ever one line. I haven't found any approaches to reduce the parameter checking code although it sounds like C# 4 has some kind of contract framework that will help there.
I have some coworkers who once, while being chewed out by a client for an overdue and bug-ridden project, bragged to the customer that they had written 5 times as much test code as operational code.
The client was not happy, and by "not happy" I mean their skin turned green, they grew to 5 times their normal size, and their clothes popped off.
You could just make a static class in a utilities assembly that checks your parameters and things. For instance, the Spring Framework (which is where I got the idea) has an Assert class that makes it really fast to check that string params aren't empty or that object params aren't null. It cleans up validation code and reduces duplicate code, which is a win-win.
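A rough C++ rendering of the same idea (Spring's Assert is Java; this analogue is mine and purely illustrative):

    #include <stdexcept>
    #include <string>

    // A small parameter-checking helper in the spirit of Spring's Assert:
    // one-line guards replace repeated blocks of validation code.
    namespace Require {
        inline void notEmpty(const std::string& value, const char* name) {
            if (value.empty())
                throw std::invalid_argument(std::string(name) + " must not be empty");
        }
        inline void notNull(const void* ptr, const char* name) {
            if (ptr == nullptr)
                throw std::invalid_argument(std::string(name) + " must not be null");
        }
    }

    void registerUser(const std::string& email, const char* licenseKey) {
        Require::notEmpty(email, "email");
        Require::notNull(licenseKey, "licenseKey");
        // ...business logic starts on line three instead of line thirty.
    }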