Why am I missing frames while recording with Flash? - actionscript-3

I'm recording video from a webcam with Flash and saving it to an Adobe (Flash) Media Server.
What are all the things that can contribute to a choppy video full of missing frames, and what do I need to adjust to fix it?
The server is an Amazon Web Services Medium (M1) EC2 instance with a 2 GHz processor and 3.75 GB of RAM. Looking at the admin console for AMS, the server never gets maxed out in terms of RAM or CPU usage.
Bandwidth never exceeds 4 Mbps.
The Flash recorder captures at 320x240 at 15 fps.
I used setQuality(0, 100) on the camera. I can still make out individual "pixels" when viewing my recording, but it isn't bad.

The server has nothing to do with this. If the computer running the Flash file can't keep up, you get dropped frames. You basically have 1000 / stage.frameRate ms to run every calculation for every frame. If your application is running at 30 fps, that is roughly 33 ms per frame. You need to make sure everything that needs to happen on each frame can run in that amount of time, which is obviously difficult or impossible to guarantee across the wide range of hardware out there.
Additionally, 15 fps itself is too low. The low-end threshold for what the human eye perceives as motion is around 24 fps, so at 15 fps you will notice choppiness. Ideally, you want to record at 30 fps, which is about where the human eye stops being able to distinguish individual frames.
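As a starting point, here is a minimal sketch of how the capture/publish side is typically set up in ActionScript 3. The AMS URL and stream name are placeholders, and the frame rate, quality and keyframe values are assumptions to tune for your own hardware and bandwidth:

// Capture/publish sketch (frame script or document-class constructor).
import flash.events.NetStatusEvent;
import flash.media.Camera;
import flash.net.NetConnection;
import flash.net.NetStream;

stage.frameRate = 30; // keep the SWF frame rate at or above the capture rate

var nc:NetConnection = new NetConnection();
var ns:NetStream;

nc.addEventListener(NetStatusEvent.NET_STATUS, function(e:NetStatusEvent):void {
    if (e.info.code != "NetConnection.Connect.Success") return;

    var cam:Camera = Camera.getCamera();
    // Ask for 30 fps; the camera falls back to the closest mode it supports.
    cam.setMode(320, 240, 30);
    // bandwidth = 0 with a fixed quality lets the bitrate float to hold that
    // quality; lower the quality figure if your upload bandwidth is tight.
    cam.setQuality(0, 90);
    cam.setKeyFrameInterval(15);

    ns = new NetStream(nc);
    ns.attachCamera(cam);
    ns.publish("myRecording", "record"); // "record" writes the file on AMS
});

nc.connect("rtmp://your-ams-host/yourapp"); // placeholder URL

If frames still drop with settings like these, profile whatever else runs on each frame of the recorder SWF, since heavy ENTER_FRAME work competes directly with the encoder for that 33 ms budget.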

Related

Is it possible to disable the Jitter Buffer in WebRTC (Chrome/Chromium)

I am trying to reduce the Chromium WebRTC video delay as much as possible for a remote machine control application. Since the transmitting and receiving PCs are directly connected via Ethernet (crossover cable) I'm guessing that receive buffering may not be necessary as there should be no delayed, out-of-order or lost packets.
I have rebuilt Chromium after adjusting the kMaxVideoDelayMs value in jitter_buffer_common.h. This has given mixed results including creating erratic behavior with receiving video (choppy) as well as making googPlisSent steadily rise over time. In addition, googJitterBufferMs and googTargetDelayMs jump around erratically when kMaxVideoDelayMs is set lower than a certain threshold (around 60ms). Everything appears to work well with kMaxVideoDelayMs set to 100ms but I would like to try to reduce overall delay as much as possible.
I would like to know if it is possible to disable or bypass the receive jitter buffer altogether as it seems that might reduce the overall delay between capturing video on the transmitting PC and displaying it on the receiving PC.
You still need a jitter buffer to store the packets until you have an entire frame (and do other related processing that's hung off the jitter buffer). Audio jitter buffers usually effectively run things, and control when audio/video get displayed. That's all deep in NetEq, and likely can't be disabled.
If you run audio and video as separate streams (not synced, or no audio), then video should already run pretty much as fast as possible, but if there is delay, it's due to the OS scheduling, and there may also be some amount of pacing delay in the DeliverFrame code (or rather the code that ends up calling DeliverFrame).

How to increase battery efficiency for a voice recording application in Windows Phone 8?

I have developed a voice recording app using WASAPI for Windows Phone 8. But users are reporting serious battery drain, and the screen does not time out while recording is on.
Also, if the user presses the lock button, background recording gets paused. Can anyone tell me how to solve these issues?
I am unaware of a way to turn off the screen while recording, or of a way to record while the application is in the background. That does not mean it's not possible, only that I don't know how. It may not be possible now, but it may become possible in the future. Other answers may explain how to do this.
So I'll list ways to reduce battery consumption while your application is running in the foreground and the screen is on:
Black display. Bright images require a lot more power than dark ones. Depending on the display technology, black pixels require a lot less power than dark pixels. Look at the Lumia Glance feature which can be always on and still requires days to drain the battery.
No animations. Depending on the display technology, redrawing the screen may require more power. In any case, calculating the animation to be drawn on the screen prevents the CPU from sleeping. Having an animation that only updates every second instead of every 15 milliseconds should already be a big improvement.
No wait loops/busy wait. If the CPU needs to wait for something, don't use this pattern:
// Busy wait: the loop keeps the CPU spinning and prevents it from sleeping.
while (true)
{
    if (arewethereyet())
        break;
}
Instead, block on an event or wait handle that is signalled when the condition occurs, so the CPU can sleep in the meantime.
Cluster work into batches. The CPU needs to be able to sleep and ideally it needs to be able to sleep for long continuous periods of time. Use a long buffer duration for the microphone and don't fetch the buffer too aggressively.

Flash Embedded FLV Memory Leak

I am making a game where I have several small character MovieClips which appear on screen randomly. There can be several characters of the same type, and when they are removed from the stage I store them in a memory pool to reuse them.
These characters have several different keyframes which I call to make them do specific things, like fly, land, etc. To improve performance, FLVs were made for their different actions, and these have been embedded in the timeline.
I am having a problem where the amount of memory assigned to Video is constantly increasing as the game is played, even though I am not making more instances of the characters. I have been researching into garbage collecting video but all the stuff I find is for when using the FLVPlayback component and I haven't found anything helpful.
Does anyone have any ideas?
Thanks!
How much is your memory increasing? If it starts at, e.g., 80 MB and slowly climbs to, e.g., 140 MB, and then either stays there or drops back to 120 MB and creeps slightly up again, then there's no need to worry. Unfortunately, that's how the Flash GC works: even if you're not leaking any memory, it will show a slow increase (and then a sudden dip as the GC collects garbage, then a slow climb again).
However, it could also be that you have a real memory leak, but to assess that you'd need to post some code. By the way, using memory pools is a great idea in games; good that you're doing it already.
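For reference, here is a minimal sketch of the pooling pattern described in the question, using a hypothetical CharacterPool class and Character MovieClip subclass (these names are illustrative, not the asker's actual code). The detail that matters for video memory is that anything parked in the pool should be removed from the display list and have its timeline stopped, so an embedded FLV isn't still decoding while the instance sits idle:

package
{
    import flash.display.MovieClip;

    // Hypothetical pooling sketch; Character stands in for your MovieClip
    // subclass whose keyframes contain the embedded FLVs.
    public class CharacterPool
    {
        private var pool:Vector.<MovieClip> = new Vector.<MovieClip>();

        // Reuse an idle instance if one exists, otherwise create a new one.
        public function acquire():MovieClip
        {
            return pool.length > 0 ? pool.pop() : new Character();
        }

        // Park an instance for later reuse: take it off the stage and stop
        // its timeline so the embedded video stops playing while pooled.
        public function release(c:MovieClip):void
        {
            if (c.parent) c.parent.removeChild(c);
            c.gotoAndStop(1);
            pool.push(c);
        }
    }
}

If the Video memory still climbs with a pool like this in place, that points at instances (or their video decoders) being created somewhere outside the pool, which is where posted code would help.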

Actionscript: Playing sound from microphone on speakers (through buffer) with constant delay

I'm looking for an example of code that samples the signal from the microphone and plays it on the speakers. I need a solution with a reasonably constant delay on different platforms (PC, Android, iPhone). A delay of around 1-2 s is OK for me, and I don't mind if it varies every time the application starts.
I tried using the SampleDataEvent.SAMPLE_DATA event on the Sound and Microphone classes. One event would put data into a buffer, the other would read data out.
But it seems impossible to maintain a constant delay: either the delay grows constantly, or it shrinks to the point where I have fewer than 2048 samples to put out and the Sound class stops generating SampleDataEvent.SAMPLE_DATA events.
I want to process each incoming frame, so using setLoopBack(true) is not an option.
PS: This is more of a problem on Android devices than on PC, although when I start to resize the application window on the PC, the delay starts to grow as well.
Please help.
Unfortunately, this won't be possible... at least not directly.
Some sound devices use a different clock for recording and playback. This is especially true for cell phones, where the hardware running the microphone may well be different from the hardware driving the headphone output.
Basically, if you record at 44.1 kHz and play back at 44.1 kHz, but those clocks are not in sync, you may really be recording at 44.099 kHz and playing back at 44.101 kHz. Over time, this drift means you won't have enough data in the buffer to send to the output.
Another complication (and more than likely your problem) is that your record and playback sample rates may be different. If you record from the microphone at 11kHz and playback at 48kHz, you will note that 11 doesn't evenly fit into 48. Software is often used to up-sample the recording. Sometimes this is done with a nice algorithm which is guaranteed to give you the necessary output. Other times, that 11kHz will get pushed to 44kHz and is deemed "close enough".
In short, you cannot rely on recording and playback devices being in sync, and will need to synchronize yourself. There are many algorithms out there for handling this. The easiest method is to add a sample here and there that averages the sample before and after it. If you do this with just a few samples, it will be inaudible. Depending on the kind of drift problem you are having, this may be sufficient.
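To make that concrete, here is a rough ActionScript 3 sketch of the mic-to-Sound buffering described in the question, with a very crude drift correction (duplicate a sample when the buffer runs low, drop one when it runs long). The target delay and thresholds are arbitrary assumptions, and real code would interpolate rather than duplicate or drop raw samples:

// Mic-to-speaker loop with a shared buffer and crude drift correction.
import flash.events.SampleDataEvent;
import flash.media.Microphone;
import flash.media.Sound;
import flash.media.SoundChannel;

var buffer:Vector.<Number> = new Vector.<Number>();
var targetSamples:int = 44100;   // ~1 s of mono samples at 44.1 kHz (assumption)
var primed:Boolean = false;

var mic:Microphone = Microphone.getMicrophone();
mic.rate = 44;                   // request 44.1 kHz capture
mic.setSilenceLevel(0);          // deliver audio continuously
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, onMicData);

function onMicData(e:SampleDataEvent):void
{
    while (e.data.bytesAvailable >= 4)
        buffer.push(e.data.readFloat());   // mono samples from the mic
}

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onPlayback);
var channel:SoundChannel = sound.play();

function onPlayback(e:SampleDataEvent):void
{
    // Build up ~1 s of latency before playing anything, then hold it there.
    if (!primed && buffer.length < targetSamples)
    {
        for (var j:int = 0; j < 4096; j++) { e.data.writeFloat(0); e.data.writeFloat(0); }
        return;
    }
    primed = true;

    // Crude drift handling: pad when starving, skip when running long.
    if (buffer.length < targetSamples - 4096 && buffer.length > 0)
        buffer.push(buffer[buffer.length - 1]);   // duplicate the last sample
    else if (buffer.length > targetSamples + 4096)
        buffer.shift();                           // drop one sample

    for (var i:int = 0; i < 4096; i++)
    {
        var s:Number = buffer.length > 0 ? buffer.shift() : 0;
        e.data.writeFloat(s);   // left channel
        e.data.writeFloat(s);   // right channel
    }
}

Note that shift() on a Vector is O(n), so a production version would use a circular buffer; this sketch only illustrates where the insert/drop correction described above would go.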

Timers in AS3 can not tick fast enough

I'm making a game in AS3 that requires a huge number of bullets to be fired sequentially in an extremely short amount of time. For example, at a certain point I need to fire one bullet every 1-5 milliseconds for about 1 second. The game runs (smoothly) at 60 fps with around 800+ objects on screen, but timers don't seem to be able to tick faster than my frame rate (around once every 16 milliseconds). I only have one enterFrame handler going, which everything else updates from.
Any tips?
The 16 milliseconds sounds about right... According to the docs, Timer has a resolution no smaller than 16.6 milliseconds.
delay:Number — The delay between timer events, in milliseconds. A delay lower than 20 milliseconds is not recommended. Timer frequency is limited to 60 frames per second, meaning a delay lower than 16.6 milliseconds causes runtime problems.
I would recommend that you create x objects (bullets) off-screen, at different offsets, on each tick to get the required number of objects in 1 second; see the sketch below. This assumes that your context allows enemies off-screen to shoot.
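As an illustration of that idea, here is a minimal frame-driven spawning sketch. spawnBullet() and the 3 ms fire interval are hypothetical placeholders: instead of relying on a 1-5 ms Timer, you accumulate elapsed time each frame and spawn however many bullets that time covers.

// Frame-driven bullet spawning sketch (timeline or document-class code).
import flash.events.Event;
import flash.utils.getTimer;

var bulletIntervalMs:Number = 3;      // one bullet every ~3 ms while firing (assumption)
var lastFrameTime:int = getTimer();
var spawnDebt:Number = 0;

addEventListener(Event.ENTER_FRAME, onFrame);

function onFrame(e:Event):void
{
    var now:int = getTimer();
    spawnDebt += now - lastFrameTime;  // milliseconds elapsed since the last frame
    lastFrameTime = now;

    // At 60 fps this fires ~5-6 bullets per frame, each offset along its
    // path as if it had been launched partway through the frame.
    while (spawnDebt >= bulletIntervalMs)
    {
        spawnDebt -= bulletIntervalMs;
        spawnBullet(spawnDebt);        // hypothetical factory; offset by the leftover ms
    }
}

This keeps everything on the single ENTER_FRAME handler you already have instead of fighting the Timer's 16.6 ms floor.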
How can you possibly have 800+ objects on screen? Is each object a single pixel, or is the entire screen just filled? To be fair, I have a 1920x1080 screen in front of me, so even at 2 pixels wide and 2 pixels tall per object, 800 objects would only span about 1600 pixels and wouldn't quite fill the screen width-wise. I'm just curious why you would have such a scenario, as I've been toying with game development a bit.
As for the technical question, a Timer is not guaranteed to fire the moment its delay expires (just some time after); it depends on how quickly the runtime can get around to processing the timer tick. My guess is that having so many objects is exhausting the CPU (on *NIX systems use top in the console, on Windows use Task Manager: is it peaking a core of the CPU?). That can confirm or deny it; alternatively, turn off the creation/updating of your objects and see whether the timer then ticks at the correct rate. If either is the case, it suggests the CPU is maxing out.
Consider using Stage3D to offload the object drawing to the GPU and free up the CPU to run your Timer. You may also want to consider a game framework like Flixel to help manage your resources, though I don't know whether it takes advantage of the GPU... actually, I just Googled and found an interesting post discussing it:
http://forums.flixel.org/index.php?topic=6101.0