I'm looking for an example of code that samples a signal from the microphone and
plays it on the speakers. I need a solution that has a reasonably constant delay on
different platforms (PC, Android, iPhone). A delay around 1-2 s is OK for me, and I don't
mind if it varies every time the application starts.
I tried using the SampleDataEvent.SAMPLE_DATA event on the Sound and Microphone classes.
One event handler would put data into a buffer, the other would read it.
But it seems impossible to maintain a constant delay: either the delay grows constantly, or
it shrinks to the point where I have fewer than 2048 samples to put out and the Sound class stops
generating SampleDataEvent.SAMPLE_DATA events.
I want to process each incoming frame, so using setLoopBack(true) is not an option.
PS: This is more of a problem on Android devices than on PC, although when I start to resize the application
window on the PC, the delay starts to grow as well.
Please help.
Unfortunately, this won't be possible... at least not directly.
Some sound devices will use a different clock between recording and playback. This will be especially true for cell phones where what is running the microphone may very well be different hardware than the headphone audio output.
Basically, if you record at 44.1kHz and play back at 44.1kHz, but those clocks are not in sync, you may be recording at 44.099kHz and playing back at 44.101kHz. In that example the output consumes about two samples per second more than the input produces, so the buffer slowly drains. Over time, this drift will mean that you won't have enough data in the buffer to send to the output.
Another complication (and more than likely your problem) is that your record and playback sample rates may be different. If you record from the microphone at 11kHz and playback at 48kHz, you will note that 11 doesn't evenly fit into 48. Software is often used to up-sample the recording. Sometimes this is done with a nice algorithm which is guaranteed to give you the necessary output. Other times, that 11kHz will get pushed to 44kHz and is deemed "close enough".
In short, you cannot rely on recording and playback devices being in sync, and will need to synchronize yourself. There are many algorithms out there for handling this. The easiest method is to add a sample here and there that averages the sample before and after it. If you do this with just a few samples, it will be inaudible. Depending on the kind of drift problem you are having, this may be sufficient.
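A hedged sketch of that idea in Java (the ActionScript version would manipulate the sample-data buffer the same way); insertAveragedSample is an illustrative name, not from any library:

    class DriftCompensator {
        // Stretches a block of mono samples by one sample: inserts the
        // average of the two neighbors at position pos (1 <= pos < in.length).
        // Done only a few times per second, this is inaudible.
        static float[] insertAveragedSample(float[] in, int pos) {
            float[] out = new float[in.length + 1];
            System.arraycopy(in, 0, out, 0, pos);
            out[pos] = (in[pos - 1] + in[pos]) / 2f; // average of neighbors
            System.arraycopy(in, pos, out, pos + 1, in.length - pos);
            return out;
        }
    }

If the drift runs the other way and the buffer keeps growing, the symmetric fix is to occasionally drop a sample and average it into its neighbor.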
I am trying to reduce the Chromium WebRTC video delay as much as possible for a remote machine control application. Since the transmitting and receiving PCs are directly connected via Ethernet (crossover cable) I'm guessing that receive buffering may not be necessary as there should be no delayed, out-of-order or lost packets.
I have rebuilt Chromium after adjusting the kMaxVideoDelayMs value in jitter_buffer_common.h. This has given mixed results including creating erratic behavior with receiving video (choppy) as well as making googPlisSent steadily rise over time. In addition, googJitterBufferMs and googTargetDelayMs jump around erratically when kMaxVideoDelayMs is set lower than a certain threshold (around 60ms). Everything appears to work well with kMaxVideoDelayMs set to 100ms but I would like to try to reduce overall delay as much as possible.
I would like to know if it is possible to disable or bypass the receive jitter buffer altogether as it seems that might reduce the overall delay between capturing video on the transmitting PC and displaying it on the receiving PC.
You still need a jitter buffer to store the packets until you have an entire frame (and to do other related processing that's hung off the jitter buffer). The audio jitter buffer usually drives the timing and controls when audio/video get displayed. That's all deep in NetEq, and likely can't be disabled.
If you run audio and video as separate streams (not synced, or no audio), then video should already run pretty much as fast as possible, but if there is delay, it's due to the OS scheduling, and there may also be some amount of pacing delay in the DeliverFrame code (or rather the code that ends up calling DeliverFrame).
I have developed a voice recording app using WASAPI for Windows Phone 8. But users are reporting serious battery drain, and the screen never times out while recording is on.
Also, if the user presses the lock button, the background recording is paused. Can anyone tell me how to solve these issues?
I am unaware of a way to turn off the screen while recording, or of a way to record while the application is in the background. That does not mean it's not possible, only that I don't know how. It may not be possible now, but become possible in the future. Other answers may explain how to do this.
So I'll list ways to reduce battery consumption while your application is running in the foreground and the screen is on:
Black display. Bright images require a lot more power than dark ones. Depending on the display technology, black pixels require a lot less power than dark pixels. Look at the Lumia Glance feature which can be always on and still requires days to drain the battery.
No animations. Depending on the display technology, redrawing the screen may require more power. In any case, calculating the animation to be drawn on the screen prevents the CPU from sleeping. Having an animation that only updates every second instead of every 15 milliseconds should already be a big improvement.
No wait loops/busy waiting. If the CPU needs to wait for something, don't use this pattern:
    while (true)
    {
        // Spins the CPU at 100% while waiting, draining the battery.
        if (arewethereyet())
            break;
    }
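Instead, block on a synchronization primitive so the thread sleeps until it's signaled. A minimal Java illustration of the idea (the Windows Phone equivalent would be an event or semaphore from the platform API):

    import java.util.concurrent.CountDownLatch;

    public class WaitDemo {
        public static void main(String[] args) throws InterruptedException {
            CountDownLatch arrived = new CountDownLatch(1);

            // Worker thread signals when the work is done.
            new Thread(() -> {
                // ... do the work ...
                arrived.countDown();
            }).start();

            // The waiting thread sleeps here instead of spinning,
            // so the CPU can drop into a low-power state.
            arrived.await();
        }
    }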
Cluster work into batches. The CPU needs to be able to sleep and ideally it needs to be able to sleep for long continuous periods of time. Use a long buffer duration for the microphone and don't fetch the buffer too aggressively.
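To illustrate the batching idea in Java with javax.sound.sampled (the WASAPI calls differ, but the principle of reading big chunks infrequently is the same):

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.TargetDataLine;

    public class BatchedCapture {
        public static void main(String[] args) throws Exception {
            AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
            TargetDataLine mic = AudioSystem.getTargetDataLine(fmt);
            mic.open(fmt);
            mic.start();

            // Fetch ~0.5 s at a time instead of a few milliseconds:
            // far fewer wakeups, longer CPU sleep periods in between.
            byte[] buf = new byte[44100]; // 0.5 s of 16-bit mono audio
            while (mic.isOpen()) {
                int n = mic.read(buf, 0, buf.length); // blocks until filled
                // ... process n bytes ...
            }
        }
    }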
I'm recording video from a webcam with Flash, saving it on an Adobe (Flash) Media Server
What are all the things that can contribute to a choppy video full of missing frames? What do I need to adjust to fix this, or what can contribute to the problem?
The server is an Amazon Web Services Medium (M1) EC2 instance: 2 GHz processor with 3.75 GB RAM. Looking at the admin console for AMS, the server is never getting maxed out in terms of RAM or CPU percentage.
Bandwidth is never over 4 Mbps.
The Flash recorder captures at 320x240 at 15 fps.
I used setQuality(0,100) on the camera. I can still make out individual "pixels" when viewing my recording, but it isn't bad.
The server has nothing to do with this. If the computer running the Flash file can't keep up, you get dropped frames. You basically have 1000/stage.frameRate ms to run every calculation for each frame. If your application is running at 30 fps, that is roughly 33 ms per frame. You need to make sure everything that needs to happen on each frame can run in that amount of time, which is obviously difficult/impossible to guarantee across the wide range of hardware out there.
Additionally, 15 fps itself is too low. The low-end threshold for what the human eye perceives as motion is around 24 fps, so at 15 fps you will notice choppiness. Ideally, you want to record at 30 fps, which is about where the human eye stops being able to distinguish individual frames.
I am making a game where I have several small character MovieClips which appear on screen randomly. There can be several characters of the same type, and when they are removed from the stage I store them in a memory pool to reuse them.
These characters have several different keyframes which I call to make them do specific things, like fly, land, etc. To improve performance, FLVs were made for their different actions and embedded in the timeline.
I am having a problem where the amount of memory assigned to Video is constantly increasing as the game is played, even though I am not making more instances of the characters. I have been researching into garbage collecting video but all the stuff I find is for when using the FLVPlayback component and I haven't found anything helpful.
Does anyone have any ideas?
Thanks!
How much is your memory increasing? If it starts at, e.g., 80 MB and slowly increases to, e.g., 140 MB, and then either stays there or drops to 120 MB and again creeps slightly up, then there's no need to worry. Unfortunately, that's how the Flash GC works: even if you're not leaking any memory, it will show a slow memory increase (and then a sudden drop as the GC collects the trash, and then slowly up again).
However, it could also be that you have a real memory leak, but to assess that, you'd need to post some code. By the way, using memory pools is a great idea in games; good that you're doing it already.
I am making a racing game in LibGDX. My game's APK size is 9.92 MB and I am using four texture packs with a total size of 9.92 MB. My game runs fine on desktop, but it runs very slowly on an Android device. What is the reason behind this?
There are a few pitfalls we tend to neglect while programming.
Desktop processors are way more powerful, so the game may run smoothly on desktop but slowly on a mobile device.
Here are some key notes which you should follow for optimum game flow:
No I/O operations in the render method.
Avoid creating objects in the render method.
Objects must be reused (for instance, if your game has 1000 platforms but the current screen can only display 3, then instead of making 1000 objects make 5 or 6 and reuse them). You can use the Pool class provided by LibGDX for object pooling; see the sketch after this list.
Try to load only those assets which are necessary for the current screen.
Check your logcat to see whether the garbage collector is being called. If so, try using the finalize method of the Object class to find which class's objects are being garbage collected, and improve on that.
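A minimal sketch of LibGDX object pooling (the Platform class is illustrative):

    import com.badlogic.gdx.utils.Pool;

    class Platform implements Pool.Poolable {
        float x, y;

        @Override
        public void reset() {
            // Called by the pool when the object is freed.
            x = y = 0;
        }
    }

    class PlatformPool {
        final Pool<Platform> pool = new Pool<Platform>() {
            @Override
            protected Platform newObject() {
                return new Platform();
            }
        };

        void spawn() {
            Platform p = pool.obtain(); // reuses a freed instance if one exists
            // ... position it, add it to the scene ...
        }

        void despawn(Platform p) {
            pool.free(p); // return it to the pool instead of leaving it for the GC
        }
    }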
Good luck.
I've got some additional tips for improving performance:
Try to minimize texture bindings (or bindings in general, e.g. when you're making a 3D game) in your render loop. Use texture atlases and, after binding a texture, draw as much as possible with it before binding another one.
Don't draw things that are not in the frustum/viewport. First calculate whether the active camera can even see the drawn object; if it can't, just don't submit it to the GPU when rendering!
Don't call spritebatch.begin() and spritebatch.end() too often in the render loop, because every begin/end pair flushes the batch and submits its data to the GPU; see the sketch after this list.
Do NOT load assets while rendering, unless you're doing it in another thread.
The latest versions of libgdx also provide a GLProfiler with which you can measure how many draw calls, texture bindings, vertices, etc. you have per frame. I'd strongly recommend this, since there can always be situations where you would not expect the memory or computational overhead.
Use libgdx Poolable (interface) objects and Pool for pooling objects and minimizing object-creation time, since creating objects can cause tiny but noticeable stutters in your game's render loop.
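A hedged sketch combining the begin/end and GLProfiler points (instance-based GLProfiler API as in recent libgdx versions; check your version's docs):

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;
    import com.badlogic.gdx.graphics.g2d.TextureRegion;
    import com.badlogic.gdx.graphics.profiling.GLProfiler;

    class FrameRenderer {
        // Assumes: profiler = new GLProfiler(Gdx.graphics); profiler.enable();
        void render(SpriteBatch batch, GLProfiler profiler, TextureRegion[] sprites) {
            profiler.reset(); // clear last frame's counters

            batch.begin();    // one begin/end pair for the whole frame
            for (TextureRegion r : sprites) {
                batch.draw(r, 0, 0); // regions from one atlas page don't force a flush
            }
            batch.end();

            Gdx.app.log("gl", "draw calls: " + profiler.getDrawCalls()
                    + ", texture binds: " + profiler.getTextureBindings());
        }
    }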
By the way, without any additional information, no one's going to give you a good or precise answer. If you think it's not worth it to write enough text or information for your question, why should it be worth it to answer it?
To really understand why your game is running slowly, you need to profile your application.
There are free tools available for this.
On Desktop you can use VisualVM.
On Android you can use Android Monitor.
With profiling you will find exactly which methods are taking up the most time.
A likely cause of slowdowns is texture binding. Do you switch between different pages of packed textures often? Try to draw everything from one page before switching to another page; see the sketch below.
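As an illustration, grouping draws by their backing texture keeps the SpriteBatch from flushing on every page switch (the sorting key is illustrative; any stable grouping by texture works):

    import com.badlogic.gdx.graphics.g2d.Sprite;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;
    import java.util.Arrays;
    import java.util.Comparator;

    class SortedDraw {
        void draw(SpriteBatch batch, Sprite[] sprites) {
            // Group sprites by texture so each page is bound only once per frame.
            Arrays.sort(sprites, Comparator.comparingInt(
                    (Sprite s) -> System.identityHashCode(s.getTexture())));
            batch.begin();
            for (Sprite s : sprites) {
                s.draw(batch);
            }
            batch.end();
        }
    }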
The answer is likely a little more than just "Computer fast; phone slow". Rather, it's important to note that your computer's Java VM is likely Oracle's very nicely optimized JVM, while your phone's Java VM is likely Dalvik, which, to say nothing else of its performance, does not have the same optimizations for object creation and management.
As others have said, libGDX provides a Pool class for just this reason. Take a look here: https://github.com/libgdx/libgdx/wiki/Memory-management
One very important thing in LibGDX is to make sure that asset loading never happens in the render() method. Load the assets at the right times so they don't end up being loaded mid-frame.
Another very important thing is to decouple your game-state calculations from rendering, so that drawing the next frame doesn't have to wait for calculations to finish; see the sketch below.
These are the 2 major things I encountered when I was making the Snake game tutorial.
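A common way to express that decoupling is a fixed-timestep loop; here's a minimal sketch of the accumulator pattern (plain Java, nothing LibGDX-specific):

    class FixedStepLoop {
        static final float STEP = 1 / 60f; // simulation step in seconds
        float accumulator = 0;

        // Call once per rendered frame with the real elapsed time.
        void frame(float deltaTime) {
            accumulator += deltaTime;
            // Run as many fixed updates as the elapsed time allows...
            while (accumulator >= STEP) {
                updateGameState(STEP);
                accumulator -= STEP;
            }
            // ...then draw once; rendering never waits on a long update.
            render();
        }

        void updateGameState(float dt) { /* physics, AI, ... */ }
        void render() { /* draw the current state */ }
    }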
Thanks,
Abhijeet.
One thing I have found is that drawing is expensive. This means that if you are drawing offscreen items, you waste a lot of resources. If you just check whether they are onscreen before drawing, your performance improves surprisingly much.
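A minimal sketch of that onscreen check against the camera's viewport (camera zoom ignored for brevity):

    import com.badlogic.gdx.graphics.OrthographicCamera;
    import com.badlogic.gdx.graphics.g2d.Sprite;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;
    import com.badlogic.gdx.math.Rectangle;

    class CulledDraw {
        void draw(SpriteBatch batch, OrthographicCamera cam, Sprite[] sprites) {
            // Rectangle covering what the camera can currently see.
            Rectangle view = new Rectangle(
                    cam.position.x - cam.viewportWidth / 2f,
                    cam.position.y - cam.viewportHeight / 2f,
                    cam.viewportWidth, cam.viewportHeight);

            batch.begin();
            for (Sprite s : sprites) {
                // Skip sprites whose bounds don't overlap the view.
                if (view.overlaps(s.getBoundingRectangle())) {
                    s.draw(batch);
                }
            }
            batch.end();
        }
    }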
Points to ponder (From personal experience)
DO NOT keep calling a function in the render method that updates something like the time or score on the HUD. Make these updates only when required, e.g. when the score increases, ONLY then update the score.
Make calls condition-specific (update on certain conditions, not all the time).
E.g. calling/updating in the render method at 60 FPS means you update the time 60 times a second when it only needs updating once per second. A sketch of the idea follows this list.
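A small sketch of the update-only-on-change idea (HudClock and setLabelText are hypothetical names):

    class HudClock {
        int lastShownSecond = -1;
        float elapsed = 0;

        // Called every frame, but the label is rebuilt at most once per
        // second, when the displayed value actually changes.
        void update(float deltaTime) {
            elapsed += deltaTime;
            int second = (int) elapsed;
            if (second != lastShownSecond) {
                lastShownSecond = second;
                setLabelText("Time: " + second); // hypothetical HUD call
            }
        }

        void setLabelText(String text) { /* update the HUD widget */ }
    }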
These points will have a huge effect on performance (thumbs up).
You need to check the image sizes in your game. If your images are large, reduce their size using the following link: http://tinypng.org/
It will help you.