Should I choose the Cardboard SDK or the Oculus SDK?

I'm new to VR development and a bit confused about the difference and the relationship between the Cardboard SDK and the Oculus SDK. If I want to develop an app that can play 360° VR videos or photos, which one should I choose?

By Oculus SDK, I assume you mean the mobile SDK for GearVR since you mention cardboard. If you're talking about the SDK for PC, then the question is Oculus vs SteamVR vs OpenVR vs Morpheus :)
Which one to develop for, I think, mostly comes down to your timeline and your audience.
GearVR is the best-quality device out there right now and it is SIGNIFICANTLY more polished than Cardboard, but it requires specific, expensive hardware (a Note 4 or S6, soon the Note 5). It has a store that people are actually buying things from (even if it's not much yet). But since GearVR apps in development need to be signed, you will only have an audience if you can commit to at least a demo that will be accepted on the Samsung store. (The alternative is to have every user go through the developer-signing system, which probably means tens of people will see it instead of thousands.)
Cardboard is a very short-term experience. There are no head straps on Cardboard for a reason - it's intended to be something you hold up for only a minute or two at a time. Most of the audience is people interested in tech demos, but many more people will be able to try out your app. Google is working on things behind the scenes, so there may be more meat on it in the future - a non-cardboard VR device I've heard rumors of, and they're pushing Cardboard pretty hard for classroom experiences. And in a couple of years, every phone MIGHT have sensors good enough to give a GearVR-level experience.
Both SDKs will give you the basic two-eye stereo 3D rendering framework. Oculus' is a little more fleshed out, with some built-in scene loading (it converts from the FBX format, which is made by MODO, which is expensive) and a UI library (I'm not very happy with it, though).
Either way, most of the work you do will likely be independent of the SDK you use, so I don't think you'll be boxing yourself in with whichever one you choose.

Now that Unity natively supports virtual reality, you can use both of them in your project, but it is a bit tricky.
Have a look at this tutorial, which shows how to compile for both the Cardboard SDK and Unity's Virtual Reality Supported option:
https://github.com/ludo6577/VrMultiplatform

If it's a web-based project, A-Frame might be all you need.

Mobile platform (android?) for developing apps (offline website)

Some years ago I dipped my foot in the water of developing WML websites and J2ME apps - and found it a rather unpleasant experience.
Hearing stories about developers making $$$ in their free time writing trivial apps for iPhone and Android, and having a (top secret - don't tell anybody) idea for an app that everyone will immediately rush out and buy, I thought I'd have a look at the current state of play regarding development tools. However, while there is no end of people pushing branded products, it's often unclear what the programming language is like and what integration it provides with mobile devices.
I could develop most of the functionality as an online website - but for reasons of confidentiality and the ridiculous cost and low speed of mobile internet connections, it makes a lot of sense to deploy most of the functionality client-side.
Google Gears looked like the ideal tool for implementing this - but Google has pulled the plug on the project.
The reasons I liked GG were:
html rendering (there will be a lot of content in the app)
a standard programming language (javascript)
integration with geolocation
If it had supported the accelerometer and bluetooth it would have been perfect!
Looking around at other approaches, I see that standard Android apps are developed using Java. While I'm not a big fan of the language, I could stretch a point in this case - but what about all the content rendering? Is there an off-the-shelf html renderer for android which I could then build my own handler for?
(if you're getting the impression that I'm something of a programming snob - you're probably right)
I had a quick look at Appcelerator - which has lots of pages telling me how wonderful it is - but I've yet to see any details of how it works, what the language looks like, how it integrates with hardware on the client, how to produce a packaged app for resale, and so on.
Any suggestions for a suitable toolkit/platform?
TIA
Yes, Google Gears is deprecated, but so what? As they clearly state, they intend to continue with the product until a suitable replacement is found (AKA HTML5). Just be sure to write your application with a migration path to HTML5 in mind and you're sweet.
Besides, it's open source... so if you need something added or changed, the code is all there.
I am currently in the process of gambling my entire future on the Google Gears platform. Don't forget that they currently use it in Gmail, so I don't see them dropping at least basic ongoing development of the platform.

A versatile yet simple game development platform for a relative beginner?

I am interested in game development. However, I am not sure which platform to choose. There are a few different platforms I have been considering so far:
Microsoft XNA
Games only work on Windows and Xbox?
JavaScript and WebGL
Bad performance. This is mainly due to JavaScript -- the language is single-threaded, so heavy work blocks the page. The only good way around that is Web Workers, which complicate the development quite a lot.
Flash
A dying technology that I personally dislike and do not support.
C++ and OpenGL
Cross-platform all the way, but it is very hard to develop games with.
Am I missing anything worth considering? What I am looking for is something simple yet powerful enough to make 2D and basic 3D games, able to run on as many platforms as possible.
Also, is it possible to run XNA games on Linux/Mac? What about mobile?
You should probably look at a framework that allows the use of Java/C++ but takes away some of the pain.
For C++ take a look at Ogre.
For Java take a look at jMonkeyEngine.
If you're going to be targeting mobile devices, including iPhones/iPads, then look at something like Unity/Unity Pro, which supports JavaScript, C# and a dialect of Python (Boo) and can publish out to multiple platforms.
You'll get better answers in https://gamedev.stackexchange.com/
If you don't mind spending some money, you might want to look at Torque.
For anything advanced you'll need to use C++, but for simple games, TorqueScript is fine. They currently support Windows, Mac and iPhone/iPad, although the Mac and iPhone/iPad support is usually less complete than the Windows support. But it is still pretty good for most things.
You can also publish the PC games to the web browser with their ActiveX and NP browser plugins.
They also support some consoles. For XBox they have a version of the engine that is built on top of XNA, and you can also get a version that is built on top of the native XBox SDK. I believe they've also gotten it going on PS3 as well. For the XBox and PS3 native stuff, you're going to be looking at some real money though.
WebGL, JavaScript and canvas are getting a lot faster now, thanks to typed arrays, native animation support and hardware-accelerated rendering; see for example:
https://hacks.mozilla.org/2010/08/more-efficient-javascript-animations-with-mozrequestanimationframe/
(download the nightly version of Firefox, Minefield, to try it out)
There are various webgl game frameworks available already (see the 'learning webgl' site for info).
It's probably not going to work on mobile/tablet platforms for a good while, though.

How do game companies handle programming for multiple platforms?

You often see that a new game will be released on Xbox 360, PS3 and Windows PC.
How do gaming companies do this? Is it a common code base compiled using different compilers? Are entirely different code bases actually required?
There's an example of this phenomenon described in this news article.
Generally speaking, the vast majority of multiplatform "triple-A" titles are implemented on top of an engine such as Unreal, Source, or other smaller engines. Each of these engines is custom-implemented and optimized for each platform and may use a lower-level API such as DirectX or OpenGL, which in turn talks to the console or PC hardware. Each of these engines also has plug-ins for platform-specific stuff (e.g., motion controls) that interact with the official drivers or APIs of the hardware.
Many of these engines support their own scripting languages or hooks for many things, so much of the game logic is write-once.
For example, take a look at the unreal engine:
http://www.unrealtechnology.com/technology.php
Most of the biggest engines, like Unreal, are so flexible and robust that they allow developers to write all kinds of games. For instance, the Unreal engine was used not only for shooters, but also for shooter-RPGs like Mass Effect.
Remember that most of the manpower in making games is devoted to graphics, set design, audio design, level design, etc., and there are custom editors for all of that. Many of the set pieces are usually driven by scripting languages. Only a small portion of folks at game companies actually write code in low-level languages like C.
The engine takes care of this by providing a platform-independence layer.
Things that vary between platforms - for instance the graphics library, threading, the clock and the filesystem - are implemented for each platform inside the game engine.
As a game programmer I can make a call to the engine to render a 3D model consisting of triangles.
The engine in turn renders this model by calling down into the platform-independence layer, which uses the implementation for the platform currently being targeted.
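To make that idea concrete, here is a minimal C++ sketch of such a platform-independence layer. It is only an illustration under stated assumptions: the Renderer interface and the backend class names are hypothetical and not taken from any real engine.

```cpp
// Hypothetical sketch of a platform-independence layer for rendering.
// Game code only sees the Renderer interface; each platform supplies an implementation.
#include <cstdio>
#include <memory>
#include <vector>

struct Triangle { float x0, y0, z0, x1, y1, z1, x2, y2, z2; };

class Renderer {                            // platform-independent interface
public:
    virtual ~Renderer() = default;
    virtual void drawTriangles(const std::vector<Triangle>& tris) = 0;
};

class OpenGLRenderer : public Renderer {    // would wrap OpenGL calls on PC
public:
    void drawTriangles(const std::vector<Triangle>& tris) override {
        std::printf("OpenGL backend: drawing %zu triangles\n", tris.size());
    }
};

class ConsoleRenderer : public Renderer {   // would wrap a console's native graphics API
public:
    void drawTriangles(const std::vector<Triangle>& tris) override {
        std::printf("Console backend: drawing %zu triangles\n", tris.size());
    }
};

// The game calls the engine; the engine calls whatever backend was selected.
void renderModel(Renderer& r, const std::vector<Triangle>& model) {
    r.drawTriangles(model);
}

int main() {
    std::vector<Triangle> model(2);          // a toy "model" of two triangles
    std::unique_ptr<Renderer> r = std::make_unique<OpenGLRenderer>();
    renderModel(*r, model);                  // the same call works for any backend
}
```

Swapping OpenGLRenderer for ConsoleRenderer is the only change needed to target a different platform; the game-side call site stays identical.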
There are two ways game companies do this:
1) Writing/using a multiplatform engine
2) Porting a game
A multiplatform engine will feature abstractions for platform-specific actions (making a Windows API call, rendering in DirectX vs OpenGL, etc etc) so that all of the work can be done once, then built for both machines. Usually it's a matter of writing simple wrapper methods for things like Direct3D calls and what not. Most newer game engines are being built from the ground up with multiplatform support. Others are adding multiplatform support.
If a game engine isn't multiplatform, it has to be converted to run on the target platform. This is usually a two-part operation. First, all of the API calls and interfaces with the hardware need to be redone for the target platform. The second part involves debugging and optimizing the game for performance. Typically a direct port will not perform very well, as the code will feature platform specific optimizations that do not apply to the new target platform.
For various reasons, ported games can suffer from performance issues, usually in spite of watered down visuals. Take a look at The Orange Box on PS3 or CoD: Modern Warfare for the Wii to see two examples of ports gone wrong. For the OB, Valve outsourced the task of porting the game(s) to EA. In the second instance, Activision decided that it made more sense to port a game on an engine designed for more powerful hardware over to a weaker platform instead of building the game on top of a new engine designed to get the most out of the Wii.
Many places will have separate teams responsible for different versions. That is why you always see some small differences. However, if a portable language is chosen, these teams may be able to trade code around.
If the company has produced a game engine, developers can just develop on top of that, leaving the engine to handle the cross-platform specifics.
I'm guessing that the art/media department is the same for all platforms.
Actually, there are some frameworks that are meant to be able to run on multiple platforms.
For example:
The XNA Framework can run on Windows, Xbox, and Windows Phone with almost the same code base. (About 90% of the same C# code can run on all of the platforms.)
Unity 3D supports PC, Mac, browsers, the iPhone and the Wii, and it will soon support Android, too.
There are other such frameworks as well.
Also, most of the popular game engines (eg. Unreal, etc) are ported across multiple platforms.
This is often accomplished with a virtual machine that handles non-time-critical game mechanics and an abstraction layer for time-critical but platform-specific operations.
The particular methods are highly proprietary, secret, and are among the most valuable assets of the game maker.
I remember reading an interview (or perhaps it was a .plan file/blog) John Carmack gave a few years ago in which he discussed developing for multiple platforms. (If memory serves, this was around the time they were releasing titles for mobile platforms.) The advice he gave was to always target the platform with the lowest system specs you plan on supporting first. His reasoning was that it is far easier to scale up than down. If you focus on the latest high-end graphics, you're likely to wind up depending on features only available on the high end, making it very difficult to scale back for mainstream and lower-end systems. Anyway, I thought it was pretty good advice.
This is all a guess because I don't work for a company that makes console games, but speaking from experience as a software developer what I imagine happens a lot of times is that external libraries are used against source code that's written in a common language, such as C++ or something. A lot of the core game code (game loop, physics stuff, etc.) can be used because the syntax is the same with the library across platforms.
However, there is a large degree of code that has to be written (and tested) that is unique to the platform. For example, most (if not ALL) graphics card-related code would have to be different for the Xbox 360 vs the PS3.
This allows for a large degree of portability on core functions and then the UI stuff and graphics-related stuff is platform-specific (not always for the UI).
Also, large game companies have 100s of developers working on a project, so they have a lot more resources than some indie developers might.
It's never perfect though. You always have to port SOME code. Unless you're using Adobe AIR, but your game is for consoles (and who uses Adobe AIR to develop REAL games?)
Game companies use commercial middleware, like RenderWare, which does not come cheap. Most game platforms also support a C++ environment for code to be compiled in. Additionally, most consoles come with a development version (PlayStations do), and there are simulators to test most code on. This middleware is now owned by EA (which is like the giant player on the field). Creating 3D games isn't just about the framework. Most of the game comes from a design document, which documents the flow of the game and the gameplay. The artwork is done in other software (Maya and Lightwave, for example), as are the 'models', which are the game characters.
Even though it might look like a horrific amount of work, when it comes to coding it isn't that big of a deal. Writing the core functionality takes a week or eight; the rest is more in design and planning. Just remember that 3D is only 10% of the overall game. These are my two cents as a former game developer.
Not necessarily video game related, but the best walk through I've seen for doing multi-platform software was in GOF (http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612). Read the case study on the windowing system.
I would say "largely they don't." All the money is in Windows or in consoles and a lot of the consoles want an exclusive license. I have seen a few ports but they're always a separate code base branched from a previous version.
Very often they use #define (for example in C++ code) so that, before compilation for the specified platform, the proper code is included or used. In bigger projects, parts of a game are sometimes totally different and written in different IDEs and compiled with different (platform-specific) compilers.
Example from my experience:
When I was working on a game for the Nintendo Wii, we were using the Torque game engine. We were programming on PCs and compiling the code for PCs. When some functionality was ready, we used Metrowerks CodeWarrior (with a special set of libraries, etc.) to compile it for the Nintendo Wii, send it to the devkit and then run it on the Nintendo Wii console.
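As a minimal sketch of the #define approach described above: the PLATFORM_WII / PLATFORM_PC macros and the function below are invented for the example, not taken from any real SDK; a build system would normally define one of them per target (e.g. compiling with -DPLATFORM_WII).

```cpp
// Hypothetical example of selecting platform-specific code at compile time.
#include <cstdio>

void presentFrame() {
#if defined(PLATFORM_WII)
    // Wii build: this branch would call the console SDK's present/swap routine.
    std::puts("presenting frame via the console SDK");
#elif defined(PLATFORM_PC)
    // PC build: this branch would call the desktop graphics API's swap call.
    std::puts("presenting frame via the PC graphics API");
#else
    // Fallback so the sketch still compiles when no platform macro is defined.
    std::puts("presenting frame via a generic fallback path");
#endif
}

int main() {
    presentFrame();   // same call site, different code per platform build
}
```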

Client-side image processing

We're building a web-based application that requires heavy image processing. We'd like this processing load to be on the client as much as possible, and we'd like to support as many platforms (even mobiles) as possible.
Yeah, I know, wishful thinking
Here's the info:
Image processing is rasterization from some data. Think like creating a PNG image from a PDF file.
We don't have a lot of server power. So client-side processing is a bit of a must.
So, we're considering:
Flash - the most widespread, but from what I read it has lackluster development tools (and no iPhone/iPad support for now).
Silverlight - allows us to use the .NET CLR, so a big ++ (a lot of our code is in .NET). But it is not supported on most mobiles (rumored Android support in the future).
HTML5 + JavaScript - probably the most "portable" option. The problem is having to rewrite all that image processing code in JavaScript.
Any thoughts or architectures that might help?
Clarification: I don't need further ideas on what libraries are available for Silverlight and Javascript. My dilemma is
choosing Silverlight means no support for most mobiles
choosing Flash means we have to redevelop most of our code AND no iPhone/iPad support
choosing HTML5 + JavaScript means we have to redevelop most of our code, and it is not yet fully supported in all browsers
choosing two (Silverlight + Flash) will be too costly
Any out-of-the-box or bright ideas / alternatives I might be missing?
This is the sort of issue that software architects run up against all the time. As per usual, there is no ideal solution. You need to select which compromise is most acceptable to your business.
To summarise your problem, most of your image processing software is written in .NET. You'd like to run it client-side on mobile devices, but there is limited .NET penetration on mobiles. The alternatives with higher penetration (eg. Flash) would require you to re-write your code, which you can't afford to do. In addition, these alternatives are not supported on the iPhone/iPad.
What you ideally want is a way to run all your .NET code on most existing platforms, including iPhone/iPad. I can say with some confidence that no such solution currently exists - there is no "silver bullet" answer that you have overlooked.
So what will you need to compromise on? It seems to me that even if you redevelop in Flash, you are still going to miss out on a major market (iPhone). And redeveloping software is extremely costly anyway.
Here is the best solution to your problem - you need to compromise on your "client-side execution" constraint. If you execute server-side, you get to keep your existing code, and you also get to deploy to just about every mobile client, including the iPhone.
You said your server power is limited, but server processing power is cheap when compared to software development costs. Indeed, it is not all that expensive to outsource your server component and just pay for what you use. It's most likely that your application will only have low penetration to start off with. As the business grows, you will be able to afford to upgrade your server capacity.
I believe this is the best solution to your problem.
Host your image processing on Amazon EC2, Azure, or Google. IIRC EC2 has many common image processing problems packaged and ready to go.
Azure is probably more familiar ground in terms of sharing code as a web service.
You just pay for CPU cycles, transfers/storage, etc.
I'm sure there will be Silverlight and JS people posting examples. Here are some image editors written in ActionScript:
Phoenix
PhotoshopExpress
There is an ImageProcessing library to start with.
Plus, PixelBender is available in Flash Player 10; it's fast, it runs in a separate thread, and people do some pretty mad things with it.
HTH
Some help for the Silverlight part:
There is an Silverlight image editor called Thumba.
And Nokola recently made one called EasyPainter, and he will also provide the source code in the future.
For the image conversion I would recommend the open source library ImageTools that also includes some basic effects.
Silverlight has a class for pixel manipulation of bitmaps called WriteableBitmap. The open source library WriteableBitmapEx is a collection of extension methods for Silverlight's WriteableBitmap. The WriteableBitmap API is very minimalistic and there's only the raw Pixels array for such operations. The WriteableBitmapEx library tries to compensate for that with extension methods that are as easy to use as built-in methods.
Pixel shaders can also be used for some fast and advanced effects. Although they are limited to Shader Model 2, shaders can be used for fast blurring, tinting and similar effects.
DISCLAIMER: I consider myself an advocate of the Flash platform. I admire Silverlight's huge potential as a technology to deploy almost any .NET content through the browser, but it has low penetration, is horribly marketed and - although perceived as such by many (mostly people who don't know either Flash or Silverlight) - is no competitor of Flash, just as Flash is no competitor of Silverlight. The idealist in me loves the idea of doing everything in HTML+JS using a standard, instead of relying on 3rd-party proprietary software. But the truth is, JS is slow, the API is limited, and implementations of JS, HTML and CSS are terribly inconsistent across browsers.
If you really wanna stick to .NET and are so interested in targeting the iPhone and its siblings, then you might wanna check out MonoTouch.
Still, even though this may surprise you, I am going to tell you to use Flash. :)
Why? The image processing bit is the smallest part of your application. Whatever it is you are writing, I am very sure of that. I don't know about Silverlight, but in Flash the filters used by "Thumba" and "EasyPainter" can be created within a day, most of them simply using ConvolutionFilter, ColorMatrixFilter, DisplacementMapFilter and BitmapData::paletteMap or even simply by applying one of the other filters Flash offers out of the box. Any additional things can be created using PixelBender, which was pointed out by George. The kernel language is a subset of C, so porting classic filters shouldn't be too time consuming. Also alchemy (an LLVM backend targeting Flash Player 10) would be an option worth investigating, although it's not very stable yet.
The biggest part of your app will be a lot of GUI design, GUI implementation, Business Logics etc. Flash is really great when it comes to simple, yet reasonably fast image manipulation and with the Flex framework and MXML you have a powerful tool to productively create the GUI of your app, that can interoperate very well with a multitude of server solutions for virtually any platform.
Also, Flash has a great and active community, offering tons of tutorials, code snippets, libraries and frameworks, and a big ecosystem, with cross-compilation tools to deliver Flash content to other platforms (including the upcoming Flash CS5, or the mentioned Elips). I don't understand where you got the impression that the Flash platform lacks development tools. The difference from the .NET suite is that they are provided by a multitude of vendors. The upcoming Flash Player 10.1 was already pointed out by George, but nevertheless I wanted to stress that it makes many of the cross-platform considerations obsolete.
Last but not least, I'd like to point out Haxe. It allows compiling to SWF, but also to C++, using the very same API provided by NME, to target the iPhone. Also, there's work in progress on an Android backend. If you aren't planning to launch within the next 4-5 months, then this is definitely an option.
Your issue is a perfect target for the Haxe programming language. Haxe is written for the web and can compile to JavaScript, Flash and Objective-C (possibly Java/.NET soon).
So you do not choose which platform you are going to invest in, but which language. Haxe is easy to adopt for an ActionScript programmer.
It makes no sense to run your image processing algorithms in a JavaScript sandbox when Flash is available, because Flash will be much faster. It also makes no sense to run heavy image processing algorithms with JavaScript on a mobile device like the iPhone. I would only support JavaScript as a worst-case fallback solution.
If you do not like to use Haxe, I would go with Flash. You can deploy your Flash application to the iPhone as well, if that is your concern. This is also great because you get native ARM code. There are actually great tools for professional Flash development available; FDT and IntelliJ IDEA are two of them. The best Haxe IDE is probably FlashDevelop at the moment of writing.
So I would definitely not use JavaScript as the only solution. Haxe is perfect for what you are trying to achieve. If you do not trust or do not want to invest in Haxe, you can use Flash because of the iPhone/iPad export.
Depending on your use case, I would also encourage you to look at cloud hosting like Amazon EC2 and Google App Engine, for instance. Hosting costs are cheap and scaling will be easy for your task. The experience will be much better when it comes to complex operations that could take a long time even on a desktop system.
In addition to other answers, another option may be a hybrid solution. For example, use Flash/Silverlight for the majority of your target audience and use server-side processing for those that don't support it (or you could create a native app for iP[hone|ad])
You may have to do something like this anyway, as the mobiles you are targeting may have insufficient processing power, depending on how complex your image processing gets.
Of course you still have the option of upgrading your server which, although you've currently discounted, is probably far cheaper than spending development time creating/deploying/testing a client-side solution.
You can use Silverlight for all Silverlight-enabled clients, and for non-Silverlight clients, do the image processing server-side. Since the Silverlight code is C#, you can double-compile it to make (mostly) the same code work as both Silverlight and non-Silverlight (i.e. server) code. This gets you the best of both worlds.
You don't say what language "all that code" you'd have to rewrite is in. Might a semiautomated translation to Javascript be practical?
Perhaps you could start out server-side, as CraigS suggests, and then move functions into the client over time instead of rewriting all at once.
Have you checked out the editor at Pixlr.com?
Take a look at their API as well.
The best solution is to use silverlight (so you already have the code ready). If the client can't run it (mobile phones, etc) then process it server-side.
It's the best compromise.
Depends on the type of image processing and the end user experience you are targeting.
As you are looking to target mobile phones, your image processing will need to take into consideration the type of handset the user or the recipient has (if messaging via SMS/MMS), as different handsets have different resolution screens and handle different image formats for main images and thumbnails.
I'd suggest that you consider a hybrid cloud architecture, as was mentioned in the Microsoft PDC keynotes this year. This would enable you to have your own server(s) to support your application, but if you require additional capacity you can scale out into the cloud using AppFabric.
Additionally, to maximise the market availability of your product, pulling the image processing into a common, reusable infrastructure allows you to target different platforms, exploiting the positives in each.
I have worked on a solution that hosted its image processing and delivery infrastructure server side and then built different UI offerings allowing sales via desktops, MNOs and AppStores. It can work and from a business perspective can offer economies of scale benefits.
Why not mention Java applets?
The good sides are:
almost all browsers support them (though the user needs to install the JRE)
all OSes are supported
Java provides the Java Advanced Imaging kit, but if a C++ DLL can be called, that is even better (JNI can call into a C++ DLL).
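As a small, hypothetical sketch of the JNI route mentioned above: the Java class and method names (ImageFilter.invert) are made up for illustration, the matching native declaration would be needed on the Java side, and the result is built as a shared library loaded with System.loadLibrary.

```cpp
// Hypothetical JNI bridge exposing a C++ pixel operation to Java.
// Requires jni.h from a JDK; the Java side would declare:
//   public class ImageFilter { public static native void invert(int[] pixels); }
#include <jni.h>

extern "C" JNIEXPORT void JNICALL
Java_ImageFilter_invert(JNIEnv* env, jclass /*cls*/, jintArray pixels) {
    jsize n = env->GetArrayLength(pixels);
    jint* data = env->GetIntArrayElements(pixels, nullptr);
    for (jsize i = 0; i < n; ++i) {
        data[i] = ~data[i];                            // toy per-pixel operation: invert bits
    }
    env->ReleaseIntArrayElements(pixels, data, 0);     // 0 = copy back to the Java array and release
}
```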
In Python, one of the most popular libraries for image processing is Pillow. Through the Pyodide project (Python running inside the browser via Emscripten), it's possible to use libraries like Pillow and NumPy for image (or matrix) processing, and convert the output to a base64 string (via the Python standard library). This can then be passed to your <img> HTML element, either through the native JS document API or with a library like React.
The way I see it, there's no one solution that meets all of your needs. Your best option, IMO, is to go with Flash and hope that Adobe reaches an agreement with Apple to get Flash on the iPhone/iPad. The major downside, of course, is that you'll have to rewrite much of your code.
If the mobile sector isn't absolutely critical, then choose the Silverlight option for reasons you mentioned already. You could also use Silverlight in an out-of-browser mode to work as a desktop application.

Questions about game development from a web developer's point of view

I've been making web apps and working with various server-side languages like PHP, Ruby and Perl for a while now. I've always been curious about game development; it's actually what I set out to do, but I ended up in web development. I am trying to transition into GD, but I cannot help seeing games from a web development POV.
GD = Game Development
WD = Web Development
Technical Questions.
How do you design UIs in games? In WD you have CSS and need minimal graphics to create a quick menu. Are there similar tools or concepts in GD?
How do you deal with storing data? Do you use flat text files? Or is there something like MySQL or SQLite that you use to store information about objects, users, etc.?
What game engines are commonly used? Are there any that use scripting languages? I only know VB and have a basic understanding of C.
With the proliferation of the iPhone and Android, is J2ME being phased out for mobile phones?
The open 3D web is coming. What are your thoughts on having 3D applications running natively in your browser?
What tools make it easy to create 3D objects, levels and game environments, and to animate characters and so on?
Where can I find out more about how server/client, client/client, and MMORPG networking works?
Where can I get or find generic or commonly used game flows? For multiplayer?
How do you deal with physics? Are there freely available algorithms or libraries that you can use?
How are real-time cutscenes made in games?
Market Questions.
Which market should you enter? Mobile, iPhone, Wii, PSP, DS, Android, PS3, PC, etc.?
Shouldn't you always enter the mobile market, as it is easy to make small games on your own yet sell a lot? Are there any resources where I can find out more about each market?
What are your thoughts on Steam content distribution? Is it the distribution model of the future? What's wrong with the traditional publisher/distributor model? How does the traditional model work, exactly?
How big is the web games market (e.g. Flash games)?
How is game development different from any other software development or web development?
I have a lot more....but those are the ones that I have been thinking about lately.
Thank you very much for reading !
UI Development
Depends on the game- is it animated, or a board-style game? Generally, UI assets are created as images, sprites, or storyboards.
Data
Again, depends on the game type. Realtime games need FAST access, so you want to store your data in a local database and cache it as much as possible. Local file-based databases tend to be the norm, either custom or off-the-shelf, such as SQLite.
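As a rough example of the off-the-shelf route, here is a minimal sketch using the SQLite C API to cache per-level progress locally; the database file name and table layout are invented for illustration.

```cpp
// Hypothetical sketch: caching player progress in a local SQLite database.
// Assumes the SQLite C API (sqlite3.h) is available and linked (-lsqlite3).
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("savegame.db", &db) != SQLITE_OK) {
        std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    // Create a simple table for per-level progress and upsert one row.
    const char* sql =
        "CREATE TABLE IF NOT EXISTS progress(level INTEGER PRIMARY KEY, score INTEGER);"
        "INSERT OR REPLACE INTO progress(level, score) VALUES (1, 4200);";
    char* err = nullptr;
    if (sqlite3_exec(db, sql, nullptr, nullptr, &err) != SQLITE_OK) {
        std::fprintf(stderr, "exec failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}
```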
Engines
There are tons of engines out there for 3D, board, etc. Popcap has made their C++ game engine open source. Others include Unity, Ogre3D...
J2ME
I wouldn't target this platform for games.
Don't know much about "Open 3D Web" but it sounds very browser-dependent, so mileage may vary across browsers.
You can play with 3D with Google Sketchup and Caligari Truespace. Truespace was bought by Microsoft and made free.
Again, tons of engines out there for networking. Example: Microsoft's XNA framework has some networking bits you can leverage.
Not sure what you mean there.
There are physics engines built into some of the gaming engines I've mentioned, and external ones you can use.
Once upon a time, realtime cutscenes were pre-rendered with 3D Studio Max or Maya. These days in-game rendering is often good enough for cutscenes: look at the latest Halo 3: ODST game. All cutscenes use the in-game engine.
Market
I looked into game development earlier this year. Casual games look to me like a growth industry- high volume, relatively low development cost. Big Fish Games for the PC is a good example there- they publish a few titles and resell most.
I think mobile game development is a huge potential market but the barriers to entry are high because it will be a crowded space. iPhone games are the 800lb gorilla but Android is coming up. PSP and others have a limited audience and are notoriously difficult.
The most important thing I learned in my research is that game development is a labor of love. It's hugely multi-disciplinary: you need programming, art, concept, production. It's more like making a movie than anything else. It's also rough to make a profit because of all that overhead. If you want to get into it, I recommend joining a game developer to learn the business. Once you have experience you can carry it forward to larger gigs at larger publishers. Eventually you can get to work on a major AAA title, after which you can really write your own ticket.
I'll stick to answering the technical questions:
1 - UIs are usually completely bespoke, with nothing resembling a standard in the same way that HTML/CSS is a web development standard. Some people use ScaleForm which is based on Flash but that is by no means common.
2 - Data is often stored in flat files - rarely text, more commonly binary. Again, these are almost always completely bespoke formats. Sometimes they are aggregated into archive files which use the zip format or something similar however. Occasionally, some programs might use sqlite, and online games often use SQL databases.
3 - There are many game engines used, although the definition of 'common' is vague. There are well-known ones like the Unreal or Source engines, down to lesser-known ones like Panda3D or Torque. Some of these are heavily focused on 3D and leave much of the rest of the functionality to other packages (or to the game developers themselves). Most can be used with scripting languages, or come with one built in (e.g. UnrealScript).
4 - J2ME - couldn't say, that's not the sector I work in.
5 - 3D web will be interesting when it's ready, but cutting edge games currently require gigabytes of client-side data. Running the application in the browser doesn't get around that download, so it's not a great benefit. Nor is it likely to be as high performing as a dedicated 3D game renderer for quite some time. So while it opens many doors, it doesn't significantly change the state of play for gaming just yet.
6 - 3D art assets are usually made with 3D Studio Max or Maya, although there are several other related tools.
7 - MMORPG networking firstly requires understanding of basic networking (ie. strip away all the HTTP fluff and get right down to the socket level). Start with Beej, work up. From there, you're best off reading talks given at conferences and reading the Massively Multiplayer Game Development books, coupling that with anything you can find on traditional game networking. 2 good starting points are the Source Multiplayer Networking docs, and Gaffer's Networking for Games Programmers. Don't expect to understand everything the first time you read it, either. And bear in mind this is a field with ongoing research and the problems are far from solved yet. And that it's also a field where "if you have to ask, you can't do it yet". Emphasis on yet.
8 - I don't know what you mean by game flow - it's not a term I've heard used before.
9 - There are several physics libraries available, including Havok, ODE, Bullet, PhysX, Box2D, etc. Some are free, some are not. You can also write your own physics for simple games (see the short sketch after this list), as it's not all that hard, and indeed that is what everybody did until relatively recently.
10 - Real time cutscenes are typically either pre-animated in something like 3D Studio Max, or scripted to run within the game engine.
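Here is the sketch referred to in point 9: a minimal, hand-rolled physics step (gravity, explicit Euler integration and a floor bounce). The constants and the Body struct are arbitrary, chosen only to keep the example self-contained.

```cpp
// Minimal hand-rolled 2D physics for a simple game: gravity, Euler integration,
// and a crude bounce off the floor. No physics library needed.
#include <cstdio>

struct Body {
    float x = 0.0f, y = 10.0f;    // position
    float vx = 2.0f, vy = 0.0f;   // velocity
};

void step(Body& b, float dt) {
    const float gravity = -9.8f;
    b.vy += gravity * dt;         // apply gravity to vertical velocity
    b.x  += b.vx * dt;            // integrate position
    b.y  += b.vy * dt;
    if (b.y < 0.0f) {             // collision with the floor at y = 0
        b.y = 0.0f;
        b.vy = -b.vy * 0.5f;      // bounce with 50% energy loss
    }
}

int main() {
    Body ball;
    for (int frame = 0; frame < 180; ++frame) {   // simulate ~3 seconds at 60 fps
        step(ball, 1.0f / 60.0f);
    }
    std::printf("final position: (%.2f, %.2f)\n", ball.x, ball.y);
}
```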
It depends very much on the platform you are developing for. Some game engines, or platforms, have built-in, platform-specific means of creating UI systems. An example is developing for the 360, where there is a proprietary UI system provided with the SDK tools.
However, systems like these tie you to a particular platform and this can be undesirable.
Another alternative is cross platform libraries like Scaleform, which provide game-side libraries for displaying UI elements, and a common way of editing and creating UI systems across different platforms.
The complexity of UIs in videogames varies wildly. Look at something like Peggle, compared to something like Codemaster's Dirt or EA's Dead Space. Each system is therefore implemented differently.
Some use 3D packages and the standard game engine to animate and render UIs. Others implement Flash. Others roll their own custom solutions. There's no easy choice or a standard like CSS I'm afraid!
Hope this helps,
-Tom