I have a video of slot machine gameplay. How can I extract only the frames in which the reels are stopped? While spinning, the game shows fake symbols that are not part of the game mathematics. Until now I have been doing it manually (via screenshots), which takes too much time, so it would be nice to automate it.
I know how to process single images, and I can already extract segments of symbols for each reel. Can you suggest an algorithm to connect the different segments and reconstruct the original strips? It is like solving a puzzle, but without clear information about the number of pieces or how exactly they fit together.
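There is no accepted approach in the original post, but one simple starting point for the first part is frame differencing: compare consecutive frames inside the reel region and keep a frame only once the difference has stayed near zero for several frames in a row. Below is a minimal OpenCV sketch; the file name, region of interest, threshold and quiet-frame count are all assumptions you would tune for your own capture.

```cpp
// Minimal sketch (not from the original post): detect "still" frames by
// comparing consecutive frames inside the reel area. The file name, ROI,
// threshold and required number of quiet frames are assumptions.
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    cv::VideoCapture cap("slot_gameplay.mp4");   // hypothetical input file
    cv::Rect reelRoi(100, 80, 440, 300);         // hypothetical reel region
    const double motionThreshold = 2.0;          // mean abs-diff below this => "still"
    const int quietFramesNeeded = 5;             // require several still frames in a row

    cv::Mat frame, gray, prevGray, diff;
    int quietFrames = 0, savedIndex = 0;

    while (cap.read(frame)) {
        cv::cvtColor(frame(reelRoi), gray, cv::COLOR_BGR2GRAY);
        if (!prevGray.empty()) {
            cv::absdiff(gray, prevGray, diff);
            double motion = cv::mean(diff)[0];
            quietFrames = (motion < motionThreshold) ? quietFrames + 1 : 0;

            // Save one frame per stop: the first frame after the reels settle.
            if (quietFrames == quietFramesNeeded) {
                cv::imwrite("stopped_" + std::to_string(savedIndex++) + ".png", frame);
            }
        }
        prevGray = gray.clone();
    }
    return 0;
}
```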
I want to detect the intersection of two objects (sprites) in my scene, but I don't want their geometric intersection to cause a collision response between the bodies in the scene.
I've created a PhysicsBody for both of my objects' shapes, but I can't find a way to detect the intersection without having the two bodies hit each other on impact.
I'm using cocos2d-x 3+ with the default Chipmunk engine (which I'd like to stick with for now).
The question is: how do I detect the intersection of elements without having them physically push each other when they intersect?
The answer is very simple (though it took me two days to figure it out).
When a contact is detected, onContactBegin() is called. If you return false from it for the relevant shapes, the physics engine skips the collision response, so you are still notified of the intersection but the bodies don't push each other.
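In cocos2d-x 3.x code, that looks roughly like the sketch below. It assumes you have already set a contact test bitmask on both bodies so the callback fires at all; the logging and tags are just illustration.

```cpp
// Sketch of an intersection-only contact listener in cocos2d-x 3.x,
// typically registered inside your Scene's or Layer's init().
// Assumes both bodies have setContactTestBitmask() configured so that
// onContactBegin is invoked for this pair of shapes.
auto listener = cocos2d::EventListenerPhysicsContact::create();
listener->onContactBegin = [](cocos2d::PhysicsContact& contact) {
    auto shapeA = contact.getShapeA();
    auto shapeB = contact.getShapeB();

    // The intersection is detected here; react to it (flag a hit, etc.).
    CCLOG("Shapes intersect: node tags %d and %d",
          shapeA->getBody()->getNode()->getTag(),
          shapeB->getBody()->getNode()->getTag());

    // Returning false tells the physics engine to ignore the collision,
    // so the bodies pass through each other instead of pushing apart.
    return false;
};
_eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);
```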
I am creating an RTS game in Flash (AS3) for the Epic Flash Game Design Contest: http://www.youtube.com/watch?v=bpFBraUbHyo&list=UUfkxvxrvpNxXvdKusYS0NfQ&index=1
I'm almost done, except that writing the class which manages all the sounds is proving quite a pain.
Basically, AS3 only gives you 32 SoundChannels before the buffer overflows. Unfortunately, my RTS handles several dozen units fighting at the same time, and each unit, especially the rifle soldiers, fires multiple shots at a time.
If I let every sound effect play, the buffer would overflow, and even if it didn't, the result would sound very noisy and messy.
So the question is: I have seen games like StarCraft with hundreds of units on screen, yet the sound stays clean and organised. How do those developers achieve this effect? Which sounds do they accept or filter out?
Currently I have 3 possible models:
1) First-in, first-out model: accept every sound, but as soon as the buffer limit is reached, silence the earliest sound in the buffer.
2) Accept-or-reject model: accept sounds until the buffer is full, then reject all further plays until sounds finish and the buffer empties.
3) Loudest-only model: my game has sounds of varying loudness; for example, explosions are louder than gunfire. In this model only the loudest 32 sounds play, and if a new sound ranks among the top 32, the quietest of the 32 is "kicked" and replaced by it.
Which model is best, or perhaps you can suggest your own model? =p
Maybe also consider using different sound files for "single" vs. "mass" events...
1 space ship - play "single spaceship sound"
2 space ships - play "single spaceship sound" twice
3 or more space ships - play "many space ships sound"
...grouping the sounds in the buffer by type might be a good idea anyway, as you could easily silence one "space ship sound" if there are too many of those, without silencing other elements.
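The project in the question is AS3, but the grouping-and-capping idea is language-agnostic. Here is a rough sketch in C++; the type names, per-type caps and the play() call are placeholders, not a real sound API.

```cpp
// Sketch of capping concurrent sounds per type and swapping in a "mass"
// variant once the cap is reached. All names and numbers are illustrative.
#include <map>
#include <string>

struct SoundManager {
    std::map<std::string, int> activeByType;   // how many of each type are playing
    std::map<std::string, int> capByType = {
        {"rifle_shot", 6}, {"explosion", 4}, {"spaceship", 2}   // hypothetical caps
    };

    // Game objects call this instead of playing sounds directly.
    void request(const std::string& type) {
        int cap = capByType.count(type) ? capByType[type] : 1;
        if (activeByType[type] < cap) {
            ++activeByType[type];
            play(type);                    // normal "single" sound
        } else if (activeByType[type] == cap) {
            ++activeByType[type];
            play(type + "_mass");          // one layered "many of these" sound
        }
        // Further requests of this type are simply dropped, which keeps
        // the 32-channel budget available for other sound types.
    }

    void onFinished(const std::string& type) { --activeByType[type]; }
    void play(const std::string& /*name*/)   { /* hand off to the real sound API */ }
};
```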
Take a look at this scenario: I have two characters, and one shoots two bullets in the direction of the other. The bullets are fired instantly and travel at infinite speed. How do I detect a collision?
Here's an image to illustrate the problem:
The red bullet would clearly miss, but the green bullet would hit. How do I perform this kind of collision test?
This type of collision test is called ray casting. Implementations vary from simple to very complex, depending on your specific application and how much time you're willing to invest in performance gains. Definitely search online for the topic if you're interested, or pick up a game programming book; it's a common operation in 3D games.
If you know that there will only ever be two bullets, then you can solve this with just a distance check between the ray created by the fired bullet and the other bullet. If the distance is less than the summed radii of the bullets, then you know they've hit.
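For the 2D case in the picture, that distance check is a point-to-ray distance test. Here is a small sketch with my own names and types (nothing below comes from a particular engine):

```cpp
// Sketch of the distance check described above (2D).
// Returns true if a circle (centre, radius) is hit by a ray starting at
// `origin` and travelling along the unit vector `dir`.
struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

bool rayHitsCircle(Vec2 origin, Vec2 dir, Vec2 centre, float radius) {
    Vec2 toCentre{centre.x - origin.x, centre.y - origin.y};

    // Project the target onto the ray; negative t means it is behind the shooter.
    float t = dot(toCentre, dir);
    if (t < 0.0f) return false;

    // Closest point on the ray to the circle centre.
    Vec2 closest{origin.x + dir.x * t, origin.y + dir.y * t};
    float dx = centre.x - closest.x;
    float dy = centre.y - closest.y;

    // Hit if the closest approach is within the radius (add the bullet's
    // own radius here if it has one, to get the "summed radii" test).
    return dx * dx + dy * dy <= radius * radius;
}
```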
If you're building some sort of game engine where many bullets will be moving, then the simplest way I can think of to accomplish this is to move the bullet along the ray it is fired from (by normalizing the bullet's movement vector) in small increments (no larger than the bullet's radius) and perform collision checks at each step.
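A rough sketch of that stepping approach, reusing the Vec2 helper from the sketch above; checkCollisionAt is a placeholder for whatever per-position test your engine already provides:

```cpp
// Sketch of marching a bullet along its (normalized) direction in steps no
// larger than its radius, testing for collisions at each step.
// `checkCollisionAt` is a placeholder for the engine's own position test.
bool marchBullet(Vec2 origin, Vec2 dir, float bulletRadius, float maxRange,
                 bool (*checkCollisionAt)(Vec2)) {
    const float step = bulletRadius;              // never step over a target
    for (float t = 0.0f; t <= maxRange; t += step) {
        Vec2 p{origin.x + dir.x * t, origin.y + dir.y * t};
        if (checkCollisionAt(p)) return true;     // hit something along the ray
    }
    return false;                                 // travelled maxRange with no hit
}
```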
No matter what ray casting method you end up using, it will be tightly integrated with whatever system you're using for spatial partitioning. There's no way to avoid querying many spatial locations when you're ray casting, so be sure to use a space partitioning system that is effective for your purposes.
I'm designing a game for the first time, and I wonder what game time is based on. Is it based on the clock, or does it rely on frames? (Note: I'm not sure 'game time' is the right term here; correct me if it isn't.)
To be more clear, imagine these scenarios:
Computer 1 is fast, up to 60fps
Computer 2 is slow, not more than 30fps
On both computers the same game is played, in which a character walks at the same speed.
If game time is based on frames, the character would move twice as fast on computer 1. On the other hand, if game time were based on actual time, computer 1 would show twice as many frames, but the character would move just as fast as on computer 2.
My question is: what is the best way to handle game time, and what are the advantages and disadvantages of each approach?
In general, commercial games have two things running - a "simulation" loop and a "rendering" loop. These need to be decoupled as much as possible.
You want to fix your simulation time step to some value (so that the simulation runs at least as often as your maximum frame rate). Complex physics doesn't like variable time steps. I'm surprised no one has mentioned this, but fixed versus variable time steps are a big deal if you have any kind of interesting physics. Here's a good link:
http://gafferongames.com/game-physics/fix-your-timestep/
Then your rendering loop can run as fast as possible, and render the output of the current simulation step.
So, referring to your example:
You would run your simulation at 60 fps, i.e. a 16.67 ms time step. Computer 1 would render at 60 fps, i.e. it would render every simulation frame. Computer 2 would render every second simulation frame. Thus the character would move the same distance in the same amount of time, but not as smoothly.
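The loop described in that article boils down to accumulating elapsed real time and draining it in fixed steps. Here is a sketch using std::chrono; updateSimulation() and renderFrame() are stand-ins for your own code, and the 60 Hz step is just the value from the example above.

```cpp
// Sketch of a fixed-timestep simulation loop with a free-running renderer,
// in the spirit of the "Fix Your Timestep!" article linked above.
#include <chrono>

void updateSimulation(double dtSeconds) { /* advance game state by dtSeconds */ }
void renderFrame()                      { /* draw the latest simulation state */ }

void runGameLoop(bool& running) {
    using clock = std::chrono::steady_clock;
    const double fixedDt = 1.0 / 60.0;    // 16.67 ms simulation step
    double accumulator = 0.0;
    auto previous = clock::now();

    while (running) {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Run as many fixed steps as the elapsed real time requires.
        while (accumulator >= fixedDt) {
            updateSimulation(fixedDt);
            accumulator -= fixedDt;
        }

        // Render as fast as the machine allows; a slower machine simply
        // skips drawing some simulation steps, as in the 30 fps example.
        renderFrame();
    }
}
```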
Really old games used a frame count. It quickly became obvious that this was a poor idea, because as machines get faster, the games run faster.
So base it on the system clock. Generally this is done by measuring how long the last frame took and using that number to decide how much 'real time' to advance during this frame.
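In code, "using how long the last frame took" is the usual delta-time pattern. A sketch, where the character position and speed are purely illustrative:

```cpp
// Sketch of frame-rate-independent movement using the previous frame's
// duration as the delta time. `characterX` and the speed are illustrative.
#include <chrono>

void updateCharacter(float& characterX, float speedUnitsPerSecond) {
    using clock = std::chrono::steady_clock;
    static auto lastFrame = clock::now();

    auto now = clock::now();
    float dt = std::chrono::duration<float>(now - lastFrame).count();
    lastFrame = now;

    // The same speed covers the same distance per second at 30 fps or 60 fps;
    // only the smoothness of the motion differs.
    characterX += speedUnitsPerSecond * dt;
}
```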
It should rely on the system clock, not on the number of frames. You've made your own case for this.
The FPS is simply how many frames the computer can render per second.
Game time is YOUR game time; you define it. It is driven by what is often called the "game loop", and frame rendering is just one part of that loop. Also look into finite state machines (FSMs) as they relate to game programming.
I highly suggest you read a couple of books on game programming. The question you are asking is exactly what those books explain in their first chapters.
For users of both machines to have the same experience, you'll want to use actual time; otherwise different users will have advantages or disadvantages depending on their hardware.
Games should all use the clock, not frames, to provide the same gameplay on every platform. It becomes obvious when you look at MMOs or online shooters: no player should be faster than the others.
It depends on what you're processing and which part of the game is in question.
For example, animations, physics and AI need to be framerate-independent to function properly. If you have an FPS-dependent animation or physics thread, then the physics will slow down, or the character will move more slowly on slower systems and incredibly fast on very fast systems. Not good.
For some other elements, like scripting and rendering, you obviously want per-frame, framerate-dependent processing: you want to run each script and render each object once per frame, regardless of the time elapsed between frames.
The game must rely on the system clock; you don't want your game to be over in no time just because it's played on a fast computer!
Games typically use the highest-resolution timer available, like QueryPerformanceCounter on Windows, to time things. Old games used frame counts, but after you could literally run faster in Quake by changing your FPS, we learned not to do that any more.