How to record system background sound (loopback recording) with Audio Queue in Mac OS?

Using an audio queue we can record sound from the system's default input device, such as the microphone. I just want to record the system's background sound (the sound card's output buffer) without opening the microphone.
Should I use AudioQueueNewOutput?
I want to record the audio device's Built-in Output (loopback recording).

Related

How to play FLV video at "incoming frames speed" (not video-internal) coming from NetStream in ActionScript

How to play NetStream frames immediately as they arrive, without any additional AS frame-rate logic?
I have recorded some audio and video data packets from the RTMP protocol received by Red5, and now I'm trying to send them back to the Flash client in a loop by pushing packets to NetStream with incrementing timestamps. The looped sequence is about 20 seconds long and is built from about 200 RTMP packets (VideoData/AudioData).
Environment: both Flash client and server on localhost, no network bottleneck, video is H.264 encoded earlier by same Flash client.
It generally works, but the video is not very fluent - there are a lot of freezes, slowdowns and long pauses. The slower I transmit packets, the more pauses and freezes occur, even extremely long pauses like transmitting the whole sequence 2x-3x times (~60 sec) with no effect - this comes up when forwarding slower than ~2 RTMP packets per second.
The problem looks like some AS logic is trying to force the frame rate of the video instead of just outputting the received frames, so one of my questions is: does AS look at in-video-frame FPS info in live streaming? Why can it play faster, but not slower? How can I play the video "by frames" instead of synchronizing the video FPS with the RTMP packet timestamps?
On the other side, if I push packets faster than recorded, the video is just faster but almost fluent - I just can't get a slower or stable stream (the speed stays very irregular).
I have analysed some NetStream values:
.bufferLength = ~0 or 0.001, increasing extremely fast (like targeting ~90 fps) when I forward packets
.currentFPS = shows the real FPS count seen in the Video object, not incoming frames/s
.info.currentBytesPerSecond = ~8 kB/s to ~50 kB/s depending on forwarding speed
.info.droppedFrames = frequently increases, even if I stream packets at ~2/sec! It also jumps after a long self-initiated pause (but the buffer is 0 the whole time!)
.info.isLive = true
.info.dataBufferLength = same as .bufferLength
It looks like AS is dropping frames because the RTMP packets arrive too rarely - as if it expects them to arrive at the internally encoded FPS.
My current best NetStream configuration:
chatStream.videoReliable = false;
chatStream.audioReliable = false;
chatStream.backBufferTime = 0;
chatStream.bufferTime = 0;
Note that if I set bufferTime to 1, the video is paused until "1 second of video" is gathered, but that isn't what happens - buffering is very slow, as if it assumes the video has an FPS of 100 or 200. Even if I'm forwarding packets fast (targeting ~15 fps without a buffer), the buffer takes about 10-20 seconds to fill.
The loop, of course, starts with keyframed video data, and the keyframe interval of the sequence is about 15 frames.
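The slow buffering described above can be modelled with simple arithmetic. All numbers here are assumptions for illustration, not taken from the Flash Player source: if the player fills a "1 second" buffer by counting frames against an assumed frame rate instead of wall-clock packet timestamps, the fill time scales with that assumed rate.

```python
# Hypothetical model: time to fill a "1 second" buffer if the player counts
# frames against an assumed FPS rather than the actual packet arrival rate.
def fill_time_seconds(assumed_fps, delivered_fps, buffer_seconds=1.0):
    frames_needed = assumed_fps * buffer_seconds
    return frames_needed / delivered_fps

# Forwarding ~15 packets/s while the player assumes 200 fps:
print(fill_time_seconds(200, 15))  # ~13.3 s, in the observed 10-20 s range
```

Under this (assumed) model, an internal frame-rate guess of 100-200 fps would reproduce the 10-20 second fill times reported above.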
Have you tried netStream.step(1)?
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html#step()
Also, I would remove 'live' from the play() call.
Finally, maybe Flash tries to sync to the audio like it does in the regular timeline. Have you tried a video without an audio channel?

Recording high quality video using Flash and Red5 Media Server

I'm running a Video Recorder application (written in ActionScript 3.0) from my local machine. It records using a Red5 server which is installed on a remote Amazon EC2 server.
To record, I'm using the following settings
Width, height and FPS (for Camera.setMode()) - 1920 x 1080 and 10
Bandwidth and Quality (for Camera.setQuality()) - 0 and 80
Buffer time (for NetStream.bufferTime) - 3600
I'm able to record video until the buffer gets filled (I'm monitoring NetStream.bufferLength constantly).
Once the recording is stopped, the data in the buffer is sent to the server. And now, when I try to play back with bufferTime = 1, the video doesn't appear.
I have ssh'ed into the EC2 server and have seen that the file does get created in the red5/webapps/vod/streams folder, but I'm unsure about its quality or whether it has recorded correctly. I've even used the command-line movie player mplayer to try to play the file, but it doesn't play - I'm guessing the EC2 server's Ubuntu lacks the playback plugins (not sure of this though).
However, when it's a low quality recording with 640 x 480 instead of 1920 x 1080, the buffer doesn't get filled beyond 0.1 or 0.2, and the video plays back smoothly.
My Internet upload speed is around 300 kbps.
How can I (if it is possible), record and then playback high quality video?
You need to use
// Ensure that no more than 43690.6 bytes/second (~43 KB/s) is used to send video.
camera.setQuality(43690.6, 0);
This works for me. I used an Amazon EC2 Extra Large instance.
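As a side note on that number: 43690.6 is bytes per second, and converting it to kilobits per second (plain arithmetic, not an Adobe-documented limit) shows why it roughly matches the ~300 kbps upload mentioned in the question:

```python
# Convert the Camera.setQuality() bandwidth cap (bytes/second) to kilobits/second.
bytes_per_second = 43690.6
kbps = bytes_per_second * 8 / 1000
print(round(kbps, 1))  # 349.5 kbps, just above the ~300 kbps upload link
```

Capping the camera's bandwidth near the real upload speed keeps the local buffer from growing without bound during recording.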
Your issue stems from these 3 causes happening simultaneously:
recording high quality video which results in the data having to be buffered locally
Flash Player buffering just the video data (good for when doing live streaming)
Red5's buggy mechanism for dealing with video data coming at the end of the recording
Red5 has been plagued by many recording issues. This HDFVR documentation article covers Red5's various recording issues and the mechanism for coping with the FP buffer when recording over slow connections.
The media server needs to account for this by waiting longer for the video packets and sorting them together with the audio packets before writing the data to disk (.flv file).
Red5 0.8 had no such mechanism thus recording high quality video over slow connections resulted in low quality/scrambled video files (just audio, all video at the end).
Red5 0.9 had audio video recording totally broken.
Red5 1.0 RC1 had a new delayed-write mechanism - controlled by the queueThreshold setting in Red5/conf/red5-common.xml - that waits for the audio and video data and rearranges the packets before writing them to disk. The queueThreshold value measures RTMP messages/packets.
Red5 1.0 final, 1.0.1 and 1.0.2 had the delayed write mechanism totally broken. With it turned on, over slow connections, Red5 was producing .flv files with only 1 or 2 video keyframes. When playing back such .flv files the video would get stuck from the 1st second and playback would continue only with audio. Using yamdi to extract the keyframe data confirmed that the .flv files were lacking video keyframes.
However, thanks to HDFVR's code contributions to Red5, Red5 1.0.3 and later has video recording over slow connections fixed.

Why am I missing frames while recording with Flash?

I'm recording video from a webcam with Flash, saving it on an Adobe (Flash) Media Server
What are all the things that can contribute to a choppy video full of missing frames? What do I need to adjust to fix this, or what can contribute to the problem?
The server is an Amazon Web Services Medium (M1) EC2 instance: 2 GHz processor with 3.75 GB RAM. Looking at the admin console for AMS, the server never gets maxed out in terms of RAM or CPU percentage.
Bandwidth is never over 4 Mb.
The flash recorder captures at 320x240 at 15fps
I used setQuality(0, 100) on the camera. I can still make out individual "pixels" when viewing my recording, but it isn't bad.
The server has nothing to do with this. If the computer running the Flash file can't keep up, you get dropped frames. You basically have 1000/stage.frameRate ms to run every calculation for each frame. If your application is running at 30 fps, that is roughly 33 ms per frame. You need to make sure everything that needs to happen on each frame can run in that amount of time, which is obviously difficult/impossible to guarantee across the wide range of hardware out there.
Additionally, 15 fps itself is too low. The low-end threshold for what the human eye perceives as motion is around 24 fps, so at 15 fps you will notice choppiness. Ideally, you want to record at 30 fps, which is about where the human eye stops being able to distinguish individual frames.
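The per-frame budget quoted above is plain arithmetic, which can be sanity-checked in any language:

```python
# Milliseconds available per frame for a given stage frame rate.
def frame_budget_ms(fps):
    return 1000 / fps

print(frame_budget_ms(30))  # roughly 33 ms per frame
print(frame_budget_ms(15))  # roughly 67 ms per frame
```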

Obtain the result ByteArray of the current playing sounds

I am developing an AIR application for desktop that simulate a drum set. Pressing the keyboard will result in a corresponding drum sound played in the application. I have placed music notes in the application so the user will try to play a particular song.
Now I want to record the whole performance and export it to a video file, say FLV. I have already succeeded in recording the video using this encoder:
http://www.zeropointnine.com/blog/updated-flv-encoder-alchem/
However, this encoder does not have the ability to record sound automatically. I need to find a way to get the sound in ByteArray at that particular frame, and pass it to the encoder. Each frame may have different Sound objects playing at the same time, so I need to get the bytes of the final sound.
I am aware that SoundMixer.computeSpectrum() can return the current sound as bytes. However, the returned ByteArray has a fixed length of 512, which does not fit the encoder's requirement. After a bit of testing, with a sample rate of 44.1 kHz, 8-bit stereo, the encoder expects the audio byte data array to have a length of 5880. The data returned by SoundMixer.computeSpectrum() is much, much shorter than the encoder requires.
My application is running at 60FPS, and recording at 15FPS.
So my question is: is there any way to obtain the audio bytes of the current frame, mixed from more than one Sound object, with enough data for the encoder to work? If there is no API for that, I will have to mix the audio and compute the resulting bytes myself - how can that be done?
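The 5880 figure above follows directly from the sample rate, channel count and recording frame rate, and mixing several sources by hand reduces to summing corresponding samples and clamping. Here is a sketch of both (plain arithmetic and a naive mixer - not the SoundMixer API):

```python
# Bytes of 8-bit stereo audio per encoder frame at 44.1 kHz, recording at 15 fps.
samples_per_frame = 44100 // 15          # 2940 sample frames per video frame
bytes_per_frame = samples_per_frame * 2  # 2 channels x 1 byte each
print(bytes_per_frame)  # 5880

# Naive mixer: sum corresponding samples (normalized floats) and clamp to [-1, 1].
def mix(tracks):
    length = max(len(t) for t in tracks)
    out = []
    for i in range(length):
        s = sum(t[i] for t in tracks if i < len(t))
        out.append(max(-1.0, min(1.0, s)))
    return out

print(mix([[0.5, 0.9], [0.6]]))  # [1.0, 0.9] - first sample clipped at 1.0
```

Summing-and-clamping is the crudest possible mix; real mixers usually attenuate or soft-clip, but the sketch shows the shape of the per-frame work.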

Actionscript: Playing sound from microphone on speakers (through buffer) with constant delay

I'm looking for an example of code that samples the signal from the microphone and plays it on the speakers. I need a solution with a reasonably constant delay on different platforms (PC, Android, iPhone). A delay of around 1-2 s is OK for me, and I don't mind if it varies every time the application starts.
I tried using the SampleDataEvent.SAMPLE_DATA event on the Sound and Microphone classes. One event would put data into a buffer, the other would read data.
But it seems impossible to maintain a constant delay: either the delay grows constantly, or it shrinks to the point where I have fewer than 2048 samples to output and the Sound class stops generating SampleDataEvent.SAMPLE_DATA events.
I want to process each incoming frame, so using setLoopBack(true) is not an option.
P.S. This is more of a problem on Android devices than on PC, although when I start to resize the application window on PC, the delay starts to grow as well.
Please help.
Unfortunately, this won't be possible... at least not directly.
Some sound devices will use different clocks for recording and playback. This is especially true for cell phones, where the hardware running the microphone may well be different from the hardware driving the headphone audio output.
Basically, if you record at 44.1 kHz and play back at 44.1 kHz, but those clocks are not in sync, you may actually be recording at 44.099 kHz and playing back at 44.101 kHz. Over time, this drift means you won't have enough data in the buffer to send to the output.
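A back-of-the-envelope sketch of that drift (using the hypothetical 44,099/44,101 Hz figures from above) shows how slowly but inevitably the buffer runs dry:

```python
# Samples lost per second when the recording and playback clocks disagree.
record_hz = 44099
playback_hz = 44101
drift_per_second = playback_hz - record_hz  # playback consumes 2 extra samples/s

# With a 2048-sample cushion in the buffer, it runs dry after:
cushion = 2048
print(cushion / drift_per_second)  # 1024.0 seconds, i.e. about 17 minutes
```

The drift is tiny per second, which is why these setups work fine in short tests and only fall apart after the application has been running for a while.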
Another complication (and more than likely your problem) is that your record and playback sample rates may be different. If you record from the microphone at 11kHz and playback at 48kHz, you will note that 11 doesn't evenly fit into 48. Software is often used to up-sample the recording. Sometimes this is done with a nice algorithm which is guaranteed to give you the necessary output. Other times, that 11kHz will get pushed to 44kHz and is deemed "close enough".
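A minimal linear-interpolation up-sampler illustrates the 11 kHz to 48 kHz case. This is a sketch of the general technique, not what Flash actually does internally:

```python
# Naive linear-interpolation resampler from src_rate to dst_rate.
def resample(samples, src_rate, dst_rate):
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(out_len):
        pos = i * src_rate / dst_rate       # fractional position in the source
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a * (1 - frac) + b * frac)
    return out

stretched = resample([0.0, 1.0], 11000, 48000)
print(len(stretched))  # int(2 * 48000 / 11000) -> 8 output samples
```

Because 48/11 is not an integer ratio, every output sample falls between two input samples, which is exactly why sloppy "close enough" resampling audibly degrades quality.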
In short, you cannot rely on recording and playback devices being in sync, and will need to synchronize yourself. There are many algorithms out there for handling this. The easiest method is to add a sample here and there that averages the sample before and after it. If you do this with just a few samples, it will be inaudible. Depending on the kind of drift problem you are having, this may be sufficient.
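The "add a sample here and there that averages the sample before and after it" compensation described above can be sketched in a few lines (illustrative helper with a hypothetical name):

```python
# Insert one sample at position idx whose value is the average of its two
# neighbours, stretching the buffer slightly to compensate for clock drift.
def insert_averaged_sample(samples, idx):
    avg = (samples[idx - 1] + samples[idx]) / 2
    return samples[:idx] + [avg] + samples[idx:]

buf = [0.0, 0.25, 0.75, 1.0]
print(insert_averaged_sample(buf, 2))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Dropping a sample works the same way in the opposite direction; done a few times per second, either adjustment is inaudible.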