You often see that a new game will be released on Xbox 360, PS3 and Windows PC.
How do gaming companies do this? Is it a single code base compiled with different compilers for each platform, or are separate code bases actually required?
There's an example of this phenomenon described in this news article.
Generally speaking, the vast majority of multiplatform "triple-A" titles are built on top of an engine such as Unreal, Source, or a smaller engine. Each of these engines is custom-implemented and optimized for each platform, and may sit on a lower-level API such as DirectX or OpenGL, which in turn drives the hardware. Each of these engines also has plug-ins for platform-specific features (e.g., motion controls) that interact with the official drivers or APIs of the hardware.
Many of these engines also support their own scripting languages or hooks for much of the gameplay logic, so that code is written once.
For example, take a look at the Unreal Engine:
http://www.unrealtechnology.com/technology.php
Most of the biggest engines, like Unreal, are flexible and robust enough to let developers write all kinds of games. For instance, the Unreal Engine has been used not only for shooters but also for shooter-RPGs like Mass Effect.
Remember that most of the manpower in making games goes into content: graphics, set design, audio design, level design, etc., and there are custom editors for all of that. Many of the set pieces are programmed via scripting languages. Only a small portion of the people in gaming companies actually write code in low-level languages like C.
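To make the "write once" scripting idea concrete, here is a minimal sketch of how an engine might expose a native function to gameplay scripts. Lua is used purely as an illustration (Unreal, for instance, had its own UnrealScript), and the function names are made up:

```cpp
#include <lua.hpp>   // Lua's C API, wrapped for C++
#include <cstdio>

// Engine-side function; level scripts can call it on any platform.
static int l_spawnEnemy(lua_State* L) {
    double x = luaL_checknumber(L, 1);   // read arguments from the script
    double y = luaL_checknumber(L, 2);
    std::printf("engine: spawning enemy at (%.1f, %.1f)\n", x, y);
    return 0;                            // number of values returned to Lua
}

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "spawnEnemy", l_spawnEnemy);  // bind C++ to the script
    // A level designer writes this once; the same script runs on every
    // platform the engine has been ported to.
    luaL_dostring(L, "spawnEnemy(10, 20)");
    lua_close(L);
    return 0;
}
```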
The engine takes care of this by providing a platform independence layer.
Things that vary between platforms, for instance the graphics library, threading, the clock, and the filesystem, are implemented for each platform inside the engine.
I can make a call to the engine to render a 3D model consisting of triangles.
The engine in turn renders this model by calling down into the platform independence layer, which uses the implementation for the platform currently in use.
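As a rough sketch of what such a layer can look like (all class and method names here are invented for illustration, not taken from any particular engine), the game code talks to an abstract interface and each platform supplies its own implementation:

```cpp
#include <vector>

struct Triangle { float vertices[9]; };  // simplified vertex data

// The platform independence layer: game code only ever sees this interface.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void drawTriangles(const std::vector<Triangle>& tris) = 0;
};

// One implementation per platform, each in its own source file.
class D3DRenderer : public Renderer {        // e.g., Windows / Xbox 360
public:
    void drawTriangles(const std::vector<Triangle>& tris) override {
        // ... issue Direct3D draw calls here ...
    }
};

class GLRenderer : public Renderer {         // e.g., PC/Mac with OpenGL
public:
    void drawTriangles(const std::vector<Triangle>& tris) override {
        // ... issue OpenGL draw calls here ...
    }
};

// Game code renders the model without knowing which platform it is on.
void renderModel(Renderer& renderer, const std::vector<Triangle>& model) {
    renderer.drawTriangles(model);
}
```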
There are two ways game companies do this:
1) Writing/using a multiplatform engine
2) Porting a game
A multiplatform engine features abstractions for platform-specific actions (making a Windows API call, rendering in DirectX vs. OpenGL, etc.) so that all of the work can be done once and then built for each target machine. Usually it's a matter of writing simple wrapper methods for things like Direct3D calls and whatnot. Most newer game engines are being built from the ground up with multiplatform support; others are retrofitting it.
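One plausible way to structure those wrappers (a sketch; every name here is hypothetical) is a single shared header whose implementation file is selected per platform by the build system:

```cpp
// gfx.h -- the same header is compiled on every platform.
#pragma once
#include <cstdint>

struct GfxTexture;  // opaque handle; its real layout differs per platform

GfxTexture* gfxCreateTexture(std::uint32_t width, std::uint32_t height);
void        gfxDestroyTexture(GfxTexture* texture);
void        gfxDrawQuad(GfxTexture* texture, float x, float y);

// The build system then links exactly one implementation:
//   gfx_d3d.cpp -- implements these functions with Direct3D (Windows/360)
//   gfx_gl.cpp  -- implements them with OpenGL (PC/Mac)
//   gfx_ps3.cpp -- implements them with the PS3's native graphics API
```

Game code calls gfxDrawQuad and friends everywhere; only the handful of small implementation files differs between platforms.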
If a game engine isn't multiplatform, it has to be converted to run on the target platform. This is usually a two-part operation. First, all of the API calls and interfaces with the hardware need to be redone for the target platform. The second part involves debugging and optimizing the game for performance. Typically a direct port will not perform very well, as the code will feature platform-specific optimizations that do not apply to the new target platform.
For various reasons, ported games can suffer from performance issues, often in spite of watered-down visuals. Take a look at The Orange Box on the PS3 or CoD: Modern Warfare for the Wii to see two examples of ports gone wrong. For The Orange Box, Valve outsourced the task of porting the game(s) to EA. In the second instance, Activision decided that it made more sense to port a game built on an engine designed for more powerful hardware over to a weaker platform, instead of building the game on top of a new engine designed to get the most out of the Wii.
Many places will have separate teams responsible for different versions. That is why you always see some small differences. However, if a portable language is chosen, these teams may be able to trade code around.
If the company has produced a game engine, developers can just build on top of that, leaving the engine to handle the cross-platform specifics.
I'm guessing that the art/media assets are the same for all platforms.
Actually, there are some frameworks that are designed to run on multiple platforms.
For example:
The XNA Framework can run on Windows, Xbox, and Windows Phone with almost the same code base. (About 90% of the same C# code can run on all of those platforms.)
Unity 3D supports PC, Mac, browsers, the iPhone, and the Wii, and it will soon support Android, too.
There are other such frameworks as well.
Also, most of the popular game engines (e.g., Unreal) are ported across multiple platforms.
This is often accomplished with a virtual machine that handles non-time-critical game mechanics and an abstraction layer for time-critical but platform-specific operations.
The particular methods are highly proprietary, secret, and among the most valuable assets of the game maker.
I remember reading an interview (or perhaps it was a .plan file/blog) John Carmack gave a few years ago in which he discussed developing for multiple platforms. (If memory serves, this was around the time they were releasing titles for mobile platforms.) The advice he gave was to always target the platform with the lowest system specs you plan on supporting first. His reasoning was that it is far easier to scale up than down: if you focus on the latest high-end graphics, you're likely to wind up depending on features only available on high-end hardware, making it very difficult to scale back for mainstream and lower-end systems. Anyway, I thought it was pretty good advice.
This is all a guess because I don't work for a company that makes console games, but speaking from experience as a software developer, what I imagine happens a lot of the time is that cross-platform libraries are used with source code written in a common language, such as C++. A lot of the core game code (the game loop, physics, etc.) can be reused because the language and those libraries behave the same across platforms.
However, there is a large amount of code that has to be written (and tested) that is unique to each platform. For example, most (if not ALL) graphics-card-related code would have to be different for the Xbox 360 vs. the PS3.
This allows for a large degree of portability in the core functions, while the UI stuff and graphics-related stuff are platform-specific (though not always the UI).
Also, large game companies have hundreds of developers working on a project, so they have far more resources than indie developers do.
It's never perfect, though. You always have to port SOME code. Unless you're using Adobe AIR, but your game is for consoles (and who uses Adobe AIR to develop REAL games?)
Game companies use commercial middleware, like RenderWare, which does not come cheap. Most game platforms also provide a C++ environment for code to be compiled in. Additionally, most consoles come in a development version (PlayStations do), and there are simulators to test most code on. This middleware is now owned by EA (the giant player on the field). Creating 3D games isn't just about the framework, though. Most of the game comes from a design document, which describes the flow of the game and the gameplay. The artwork is done in other software (Maya and LightWave, for example), as are the 'models', which become the game characters.
Even though it might look like a horrific amount of work, when it comes to coding it isn't that big of a deal. Writing the core functionality takes a week or eight; the rest is more about design and planning. Just remember that 3D is only 10% of the overall game. These are my two cents as a former game developer.
Not necessarily video-game related, but the best walkthrough I've seen for doing multi-platform software was in the GoF book (http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612). Read the case study on the windowing system.
I would say "largely they don't." All the money is in Windows or in consoles and a lot of the consoles want an exclusive license. I have seen a few ports but they're always a separate code base branched from a previous version.
Very often they use preprocessor directives like #define and #ifdef (in C++ code, for example) so that before compilation for a given platform the proper code is included or used. In bigger projects, parts of a game are sometimes totally different and are written in different IDEs and compiled with different (platform-specific) compilers.
Example from my experience:
When I was working on a game for the Nintendo Wii, we were using the Torque game engine. We programmed on PCs and compiled the code for PCs. When some functionality was ready, we used Metrowerks CodeWarrior (with a special set of libraries, etc.) to compile it for the Wii, sent it to the devkit, and then ran it on the Wii console.
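A minimal sketch of the #define/#ifdef approach described above (the platform macro names are illustrative; each toolchain defines its own):

```cpp
#include <cstdio>

// Select a platform-specific implementation at compile time.
void savePlayerProgress(const char* data) {
#if defined(PLATFORM_WII)
    // Hypothetical: write via the Wii's internal storage API.
    std::printf("saving via Wii storage: %s\n", data);
#elif defined(PLATFORM_WINDOWS)
    // On Windows, just write an ordinary file on disk.
    if (std::FILE* f = std::fopen("save.dat", "wb")) {
        std::fputs(data, f);
        std::fclose(f);
    }
#else
#   error "No save-game backend implemented for this platform"
#endif
}
```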
I am trying to decide whether it is worth it or not to put in the time to make an app work on Windows 2000 and/or Windows 98, so I'm curious to find out how many people are still using these operating systems.
Thanks, as always.
w3 operating system statistics (win 2000: 0.7%)
w3 counter operating system statistics (win 2000: 0.54%)
Frankly, it's going to be a lot of work to support those operating systems, and unless you have a specific reason to, you probably shouldn't. You should probably just support XP and up for an "average" Windows application, as that won't be difficult to code, and because the next-earliest version, Windows 2003, is below 2% in both of those charts. :D
Also possibly of interest: .net framework installation statistics
deployment rate of the .net framework
I'd say that you should NOT add support for ancient versions of operating systems unless there are customers who are really desperate for it.
People who are still using 9X and 2000 need to be given every encouragement to move to a newer OS platform. It is really for their own good.
When a new version of a framework or language appears (e.g. .NET 3.5, SQL2008), what approach do people take to when to adopt/upgrade?
Generally, developers will say as soon as possible (they want it on their CV, and from a management perspective, giving them what they want provides a motivation boost), but commercially there is often little incentive (few clients demand the latest version), and from a cost perspective (retesting, training) there is often a disincentive.
I'm particularly thinking of "on-going" systems and projects (such as in a software house) which exist and evolve over years where taking the "new projects use the new technology" approach doesn't work.
Are people driven by specific requirements (the need to use a new feature, a potential or existing client demanding support for it), do they formally assess it (in which case what are the criteria) or do they upgrade as a matter of routine (in which case when - leading edge vs. bleeding edge)?
Do people think that not being on the latest version of something should be considered technical debt and managed as such?
Or is "if it ain't broke don't fix it" a valid approach?
Read up on Technical Debt. This is a simple cost-benefit decision.
The "if it ain't broke don't fix it" is a common management policy that says "tomorrow's dollars aren't worth as much as today's, so don't plan for future improvements." Eventually technical debt accumulates to the point where the product can no longer limp along.
The most common breaking point is when some piece of the infrastructure is no longer supported. By then, incremental change is impossible.
Reinventing from scratch is a new capital investment; fixing existing code is an expense. This accounting distinction forces management to make technically crazy decisions.
In the case of open source software, careful technical management is required, since there's no official "support sunset" announcement from the likes of Oracle/Sun. Bad technical management, of course, leads to technical bankruptcy.
We look at the support lifecycle costs. For how long are the older versions supported, and at what costs? Platforms like Windows and Java tend to move fast as compared to mainframe environments, and part of the cost of doing business on those platforms is to perform periodic upgrades. In a rational world, that is!
New versions can have killer features we need -- but that is rare in enterprise development. The main positive selling points of new versions (as opposed to negative ones, such as expired support) tend to be greater developer efficiency, which is hard to measure. Against that, as you indicate, the cost of retraining must be considered, not only for the initial developers but, crucially, for maintenance. In each upgrade, some applications tend to be left behind as too critical to retire and too expensive or fragile to upgrade. Over time, the number of platforms and versions you have to support increases your overall technical debt (no matter their age).
Another criterion for upgrading to new versions (which you note) is the ability to attract and retain staff. In the current economic climate, that's playing second fiddle, but it still cannot be ignored completely. You want at least a seasoning of enthusiastic and knowledgeable developers.
I think the killer question is whether your app will survive long term if you NEVER upgrade the platform/language version. If you think it can't, you may as well upgrade sooner rather than later, as it will only become harder.
Think about how long your app should be actively developed until you need a full rewrite. If you never plan to rewrite it, I would upgrade continually. Consider how difficult it will become to find the best developers if you are working in an outdated technology. Consider how new framework/language features could speed up your development process in the long term, for a bit of short term pain.
When you really need to. .NET 1.0 was crappy, 1.1 was a nice upgrade, but web development with VS2003 was not so smooth. Things improved with VS2005 and .NET 2.0, and I still see many developers and companies sticking with .NET 2.0. The previous versions were too fresh; version 2.0 was mature tech. So, if you were happy with 1.1, why would you upgrade? If you are happy now with 2.0, why upgrade to 3.5 or 4.0?
When the benefits of upgrading (more features, or a bugfix you need) outweigh the risks/costs involved (new issues, breaking existing code).
When you develop for Microsoft-based platforms, like a Windows Forms app for Windows or an ASP.NET web app for Windows Server, a nice time to migrate is every two major versions of the OS. For example, if your app was developed for Windows 2000, you ought to migrate at Vista, though XP can be neglected. Similarly, if it was designed for XP SP2, you can safely ignore Vista and target Windows 7. Microsoft usually never (or rarely) breaks incremental OS updates, so an app running on today's OS will definitely run on the next one, but never on the one following it. (If it ran, how could M$ make money???)
(Source: myself, a Windows developer for over 5 years.)
I'm in the upgrade-as-soon-as-possible camp (though I might wait a month after a new version comes out, just in case of uncaught issues). There are a few things you need to think about:
1. Security Releases
Many of the people who tell me "if it isn't broke, don't fix it" are also the same people who would close their eyes when security patches get released. Think Equifax.
To me it is an ethical responsibility to at least be on security supported versions of a framework. We owe it to our customers to safeguard their data.
2. Attracting & Retaining Talent
There is a lot of talk about how the programming language or framework used doesn't matter. But in my experience, the cleanest code and design for a web app are usually written by people who are passionate about the framework and programming language being used, because of their experience and expertise with it.
These people are unlikely to stay around for long or join your company if you stick to a very old version. Please think about your developers' happiness.
3. Newer, simpler ways offered by the newer version
Very often, newer versions of a framework make something that used to be hard much easier. If we do not upgrade, we miss out on the good new packages and features, and we keep writing our code in the old, frustrating way, knowing there is a much simpler way to achieve the same thing. And when it finally comes time to upgrade, we may end up having to rewrite that code the new way anyway. So why not upgrade, use the new and better way, and waste less time?
Recently I found out that the company a friend of mine co-owns uses 4D, which I've never heard of before. They swear by it, but they're non-technical and what they say about it sounds like memorized marketing blurb. Unfortunately the 4D website also seems devoid of any actual information and is filled with words like "comprehensive", "solution", "platform" and "integrated" instead.
Since that thing is rather expensive and uses a custom language that I don't have much inclination to learn just for one project, I'm cautious about it and I'm wondering if anyone had any experience with it? Would you recommend it? What is it good for? What competitive advantage would I gain by learning it as a programmer, or using it as a company?
4D has been around for a long time (~25 years), so it's much older than e.g. MySQL. Think of it as a professional version of Microsoft Access: It has its own Pascal-inspired host language, its own relational database engine, a very mature IDE for rapid GUI development and a custom runtime which allows for true "write once, run anywhere" (anywhere being Mac OS (X) and Windows, that is). Nowadays, it also understands SQL, there's a server version and even an integrated web server. It's fairly powerful, so the comparison to Access probably does not do it justice.
Today, I believe it's mostly used for legacy apps which are as old as 4D is. I don't think I would learn it again today, much less start new projects with it, since you can get the same functionality and then some by stacking up open source components.
I used to do some very serious 4D work, one of the systems I wrote is still in use as an enterprise system about 16 years later. I got frustrated because they were taking years to come out with the new object-oriented version of the language and I was writing thousands of lines of code to use a third-party table control.
4D delivers cross-platform, very high-performance client-server systems using a proprietary server. The database model is much more set-oriented than SQL and pulls the sets all the way into the core language. It does a nice job of delivering code to the clients because it compiles all procedures to native code which is cached locally and updated on-demand when it is out of date.
The language and GUI environment have their quirks, but the flip side is that there will probably be a good living to be made supporting it as a legacy platform. If you can get someone else to pick up the tab for the tools, it may be a useful addition to your consulting toolbox. Consider how much business-specific code must be out there for a unique product with that long a history!
An engineer for whom I have huge respect was recently hired by 4D which says a lot about their commitment to the future, hiring this kind of guy.
I've been working a lot with legacy systems recently, doing a port from old Mac stuff to WPF, and the contrast between the mostly-unused complexity of Visual Studio and the old Mac tools reminded me of 4D. I'm also porting my OOFILE C++ database and reporting frameworks to REALbasic; the OOFILE set-oriented operations came directly from what I loved about 4D, and this too made me think I was originally too harsh in this answer.
The thing to remember about 4D is that it was set-oriented from the beginning (written by a mathematician) and much easier to use for many things than SQL. The deployment model of 4D Server is a superb combination of desktop app and network provision - compiled components are cached on the server and automatically sent to a client when needed. There's no need to shut down or actively push or deploy updates. The GUI model of 4D was frustrating but looking at the site today, they have solved most of the issues that I had to use third party solutions for years ago.
Avoid it like the plague. My company uses it, and it's just a constant exercise in frustration. It performs nowhere near as well as the sales pitch would have you believe, and the documentation is either non-existent or not helpful.
In my opinion, there is no reason to begin learning 4D unless you want a simple database app and are unable or unwilling to learn how to create GUIs in a bigger language. The main advantage of 4D is that the built-in plumbing between the UI and the database can handle most of what is needed. If you want something quick, small, and in-house, you can get by with 4D, but if you need to develop a powerful commercial application, you will run into a few walls. If you need something that 4D doesn't provide automatically, it will be very difficult to get it working.
I consider the language completely archaic. It works for what it does but our product has become limited by the language and database itself. We keep running into weird quirks and have to code our way around them.
I have experience in 4D 2003 and 2004 but we haven't upgraded to the latest version because of the costs. It is extremely expensive. Each customer needs to buy licenses for each computer that needs to run the software. Our product costs over $1000 for a new office because of the licenses. When a new version of 4D is released every single customer has to pay to upgrade their licenses.
After looking at https://www.4duk.com/products/ataglance.html, I'd recommend you stay clear - it looks like one of those products that's going nowhere.
It reminds me of the time I was made to use a development platform called Witango: an absolute nightmare to use, and all the apps had to be rewritten in .NET very shortly afterwards.
Invest your time learning something more mainstream/employable.
Avoid at all cost. 4D used to be a good Mac database twenty years ago but is obsolete today. Extremely expensive to deploy and poorly supported. I have used it for many years and have since moved to Real Studio for cross-platform database development, which has a more modern language and a far more active developer community.
I'd be wary of investing too much into something like this. On the plus side, if that's what your company uses, learning it will pay dividends. But the skills you learn will be hard to use anywhere else.
I think more than half the replies here are inaccurate. I know of more than 20 companies with over 1000 users, and I believe there are a lot more.
With 4D v12.1 (www.4d.com) you can easily deploy at the click of a button for single-user or client-server, on Mac or Windows. And there are easy-to-set-up plugins for integration with Flex, the iPhone, and Android. Their knowledge base and documentation are very neat and comprehensive.
They have a great engineering team and the support from 4D and the online community is just fabulous. I have been using 4D for several years and I have no complaints.
4D, as someone else pointed out, gives you a fully integrated backend database and frontend. The client-server connections are stateful, so you don't need to worry about record handling and client-server session handling.
At less than $1000 per year it is not expensive, and you can deploy unlimited single-user apps. Which other proprietary development platform gives you that?
I am sure Real Software has its pros and cons too. There are many choices nowadays, and there are many ways to skin a cat.
With the iPhone and Android around, I feel Symbian is obsolete. But it is going to be open-sourced. However, the API looks very different: with so many different types of descriptors, arrays, and active objects, people feel uneasy about it. Look at the Wikipedia articles here:
http://en.wikipedia.org/wiki/Symbian_OS#Developing_on_Symbian_OS
http://en.wikipedia.org/wiki/Active_objects
I think that when it goes open, the first thing the community should do is clean it up. Though it's very difficult, I feel it's necessary.
The main reason Symbian is going open source is to become competitive. The main advantage of Symbian is that it is very stable, with more than a decade of mobile experience. With the strong support of Nokia, and the port of Qt, it can definitely be a major player.
Wikipedia isn't exactly representative.
Symbian OS development basics have recently been boiled down to under 50 pages in http://www.quickrecipesonsymbianos.com
There is an entire ecosystem that knows about the specifics of developing for Symbian OS. The C++ idioms might be a pain to learn but they have a purpose when it comes to using a mobile platform.
There is little technical justification to get rid of them.
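For readers who haven't seen the idioms, here is roughly what a minimal active object looks like in Symbian C++ (a sketch from memory; real code would also use Symbian's two-phase construction and leave-based error handling):

```cpp
#include <e32base.h>  // CActive, CActiveScheduler

// An active object wrapping an asynchronous timer request.
class CMyTimer : public CActive {
public:
    CMyTimer() : CActive(EPriorityStandard) {
        CActiveScheduler::Add(this);     // register with the scheduler
        iTimer.CreateLocal();            // real code would check the error
    }
    ~CMyTimer() {
        Cancel();                        // standard idiom: cancel before destruction
        iTimer.Close();
    }
    void Start() {
        iTimer.After(iStatus, 1000000);  // request completion in one second
        SetActive();                     // mark the request as outstanding
    }
private:
    void RunL() {
        // called by the scheduler when the request completes;
        // handle the event and possibly issue the next request
    }
    void DoCancel() {
        iTimer.Cancel();                 // called by Cancel() while active
    }
    RTimer iTimer;
};
```

Single-threaded event handling via the scheduler avoids a lot of locking bugs, which is part of the purpose mentioned above.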
Making things simpler for developers is another goal, and a very important one. That's why many runtimes have been introduced for Symbian OS development: Qt, Ruby, Java, Python, Open C, Flash, NS Basic, .NET...
The customized, open C++ allows developers to add runtimes efficiently.
Each runtime has its own trade-offs to balance performance and ease of use.
Open sourcing will certainly make runtime integration and native C++ development easier, but there is a commercial point to it too: it gets more people interested, and the platform compares more favourably to its competitors.
I think it's too early to say whether Symbian going open-source will be a good or bad thing for the OS. The debate over the branding selected for the Symbian Foundation website shows a certain lack of clarity of the role Symbian software will play in the future.
While it's true to say that there is an entire ecosystem that knows about the specifics of developing for Symbian OS, that's pretty meaningless in its own right. After all, there's still an active "ecosystem" that knows how to develop Cobol applications for IBM mainframes.
You need to consider the size of the ecosystem and appreciate that it is small, given that Symbian OS has been around for over a decade and the software powers in excess of 100 million devices today. Consider then the rate of growth of the ecosystems surrounding the offerings from Google and Apple: Symbian never generated that level of excitement and never saw that sort of growth in developer interest. Of course, we're a decade down the line, and you could argue Symbian has done the hard work and created the landscape in which Google and Apple are now competing. But just because Symbian was first doesn't make it best, and doesn't give it any right to survive.
It is true to state that the Symbian C++ idioms are a pain to learn. However, it is incorrect to suggest that there is no justification for getting rid of them. The justification is the persistent perception, 10 years on, that developing native code for Symbian OS is too hard. Most if not all of these painful idioms were design decisions taken over a decade ago, and whilst still beneficial on today's mobile devices, they are no longer essential. Mobile hardware has moved on substantially in the last decade; Symbian OS has not fundamentally changed, at least in terms of the developer offering. Consider where PCs would be if the hardware had developed as it has but the software had stopped at Windows 3.1 or 95. We almost certainly wouldn't be able to have this discussion in quite this way, for starters.
Looking at alternative mobile platforms, consider Android and Maemo. Both are Linux-based systems. Both use more developer-focused, standard development approaches, which leave Symbian OS looking like it comes from another age.
That in itself is not necessarily a problem because as others have noted, Symbian OS supports several runtime environments that make development for mobile devices that happen to run Symbian OS much more approachable for the average developer.
Taking the runtime support to its natural conclusion, the underlying OS becomes irrelevant. A choice made by the device manufacturer based on cost, time to market, quality etc. But the end user doesn't care and in many cases doesn't know what the OS is. Developers then develop for their preferred runtime, rather than write native code.
Of course, we're not at that conclusion yet. We're still travelling the long road. Therefore native code still plays an important part in mobile devices. Hence the ease with which developers can write for a given platform is important - assuming the device manufacturers believe in supporting developer platforms.
So, will open sourcing be good for Symbian? It's difficult to see how open sourcing will be bad for Symbian. But whether it will be good or not depends on the ability of the community to make Symbian OS into the OS the community needs.
There's a move in Symbian OS towards using more common languages for development, including C, Ruby, Python, etc. Try thinking of Symbian C++ as being like Win32 programming (you're not telling me that's easy!). If you don't want to use it, you don't have to, but it's the native language and therefore the most efficient.