I am just curious: I am making a very simple game and I want to trigger the game logic when the user claps (i.e., makes a loud sound). Is that possible in libGDX?
libGDX provides the AudioRecorder interface for exactly this: it gives you access to raw PCM data from the microphone on desktop and Android.
int samples = 44100;
boolean isMono = true;
AudioRecorder recorder = Gdx.audio.newAudioRecorder(samples, isMono);
This will create an AudioRecorder with a sampling rate of 44.1 kHz in mono mode. If the recorder cannot be created, a GdxRuntimeException is thrown.
Samples can be read as 16-bit signed PCM:
int seconds = 5;
final short[] data = new short[samples * seconds];
recorder.readSamples(data, 0, data.length);
Audio recording is not supported in the JavaScript/WebGL backend.
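Once readSamples() has filled the buffer, detecting a clap can be as simple as checking whether the peak amplitude crosses a threshold. A minimal sketch (the class name and threshold value are illustrative, not part of libGDX; tune the threshold against your device):

```java
// Hypothetical clap check over a buffer of 16-bit signed PCM samples.
public class ClapDetector {
    // 16-bit PCM peaks at 32767; roughly half of full scale is a loud spike.
    private static final int THRESHOLD = 16000;

    public static boolean isClap(short[] samples) {
        for (short s : samples) {
            if (Math.abs(s) > THRESHOLD) {
                return true;
            }
        }
        return false;
    }
}
```

Since readSamples() blocks until the buffer is full, call it from a background thread with a short buffer (e.g. a tenth of a second of samples) so the game reacts quickly.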
I'm using MediaCodec with targetSdkVersion 28 (Android 9) to convert the PCM stream from AudioRecord into AMR for a 3GPP/GSM VoIP application. The onInputBufferAvailable() callback calls AudioRecord.read(bb, bb.limit()) to queue PCM samples to the encoder through the available ByteBuffer, and the onOutputBufferAvailable() callback accepts each AMR frame and passes it down to the RTP layer for packetization and transmission. This works well on the variety of Android 7, 8, and 9 devices we've been testing with.
However, on a Samsung XCover Pro running Android 10, the onOutputBufferAvailable() callback isn't triggered until 26 AMR frames are available, instead of a single frame as before. Since each frame represents 20 ms of audio, this causes an audio delay of over half a second. So my question is: what control do I have over a MediaCodec audio encoder to get it to trigger the onOutputBufferAvailable() callback when a particular number of frames (ideally between 1 and 5) are available?
The encoder is created like this...
String mimeType = MediaFormat.MIMETYPE_AUDIO_AMR_NB;
MediaFormat format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, mimeType);
format.setInteger(MediaFormat.KEY_SAMPLE_RATE, sampleRate);
format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
MediaCodec encoder = MediaCodec.createEncoderByType(mimeType);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
I've experimented with the MediaFormat.KEY_MAX_INPUT_SIZE parameter, but that doesn't appear to have any effect, and setting MediaFormat.KEY_LATENCY didn't help either (and anyway the docs say it only applies to video codecs).
Any suggestions?
I am working on an Internet radio streaming application for Windows Phone, and I am researching best practices and different implementations. I am using the following code to read the stream:
private MemoryStream bufferStream;
private Stream stream;
...
...
...
byte[] data = new byte[2048];
int read;
while ((read = stream.Read(data, 0, data.Length)) > 0)
{
    // Write only the bytes actually read; the last chunk is usually
    // shorter than the full 2048-byte buffer.
    bufferStream.Write(data, 0, read);
}
I am not sure whether this is an efficient way. I have also seen a circular buffer implementation. What is the most efficient way to stream the music without any "hiccups", "artifacts", or interruptions?
I found the phonesm project on CodePlex, which provides great examples and functionality for implementing internet audio streaming.
Have a look over here too:
http://www.c-sharpcorner.com/uploadfile/dhananjaycoder/smooth-streaming-on-windows-phone-7/
Hope this helps!
I have a desktop application which streams raw PCM data to my browser over a websocket connection. The stream looks like this ...\\x00\\x00\\x02\\x00\\x01\\x00\\x00\\x00\\x01\\x00\\xff\\xff\\xff\\xff\\....
The question is simple: can I play such a stream in HTML with the Web Audio API / WebRTC / ...?
Any suggestions are very welcome!
EDIT
This code plays randomly generated noise:
function myPCMSource() {
    // random float in the [-1, 1] range the Web Audio API expects
    return Math.random() * 2 - 1;
}
var audioContext;
try {
window.AudioContext = window.AudioContext || window.webkitAudioContext;
audioContext = new AudioContext();
} catch(e) {
alert('Web Audio API is not supported in this browser');
}
var bufferSize = 4096;
var myPCMProcessingNode = audioContext.createScriptProcessor(bufferSize, 1, 1);
myPCMProcessingNode.onaudioprocess = function(e) {
var output = e.outputBuffer.getChannelData(0);
for (var i = 0; i < bufferSize; i++) {
output[i] = myPCMSource();
}
}
So replacing myPCMSource() with the websocket stream input should make it work somehow. But it doesn't: I get no errors, yet the API plays no sound or noise at all.
Use a ScriptProcessorNode, but be aware that if there is too much load on the main thread (the thread that runs your javascript, draws the screen, etc.), it will glitch.
Also, your PCM stream is probably in int16, and the Web Audio API works in terms of float32. Convert it like so:
output_float[i] = (input_int16[i] / 32767);
that is, go from the signed 16-bit [-32768; 32767] range to the [-1.0; 1.0] range.
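As a sketch of that conversion (the function name is illustrative): each binary websocket message arrives as an ArrayBuffer of little-endian signed 16-bit PCM, and you convert it once into the float32 samples the Web Audio API expects, using the divisor from above.

```javascript
// Hypothetical helper: convert one binary websocket message (an
// ArrayBuffer of signed 16-bit PCM, platform-endian) into float32 samples.
function int16ToFloat32(arrayBuffer) {
  const int16 = new Int16Array(arrayBuffer);
  const float32 = new Float32Array(int16.length);
  for (let i = 0; i < int16.length; i++) {
    // Map the signed [-32768, 32767] range into roughly [-1.0, 1.0].
    float32[i] = int16[i] / 32767;
  }
  return float32;
}
```

In the socket's onmessage handler (with ws.binaryType = 'arraybuffer') you would convert each message once and queue the result for the onaudioprocess callback to drain.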
EDIT
I was using output_float[i] = (input_int16[i] / 32767 - 1);, this article shows that you should use output_float[i] = (input_int16[i] / 32767);. Now it's working fine!
Just for the record, the ScriptProcessorNode is deprecated. See the MDN article for details. The feature was replaced by AudioWorklets and the AudioWorkletNode interface.
In short, a ScriptProcessorNode runs outside of the browser's internal audio thread, which creates at least one frame (128 samples) of latency. Worse, if the main thread is busy, the ScriptProcessorNode often fails to respond quickly enough and will simply drop audio every so often.
Worklets are basically task-specific workers that run in one of the browser's internal threads (paint, layout, audio, etc.). Audio worklets run in the audio thread and implement the guts of custom audio nodes, which are then exposed through the Web Audio API as normal.
Note: You are also able to run WebAssembly inside worklets to handle the processing.
The solution provided above is still useful, as the basic idea holds, but it would ideally use an audio worklet.
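As a sketch of that idea (names are illustrative): the main thread posts Float32Array chunks of PCM to the worklet node's MessagePort, and the processor drains a queue of chunks into each 128-sample render quantum. The queue-draining logic itself is plain JavaScript:

```javascript
// Hypothetical queue drain for an AudioWorkletProcessor: copy queued
// Float32Array chunks into the render quantum's output channel, leaving
// any unfilled samples at zero (silence) when the queue underruns.
function drainQueue(queue, output) {
  let written = 0;
  while (written < output.length && queue.length > 0) {
    const chunk = queue[0];
    const n = Math.min(output.length - written, chunk.length);
    output.set(chunk.subarray(0, n), written);
    written += n;
    if (n === chunk.length) {
      queue.shift();                 // chunk fully consumed
    } else {
      queue[0] = chunk.subarray(n);  // keep the unread tail
    }
  }
  return written; // number of samples actually filled
}
```

Inside the processor's process(inputs, outputs) you would call drainQueue(this.queue, outputs[0][0]), pushing incoming chunks onto this.queue from this.port.onmessage.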
I have a problem with USB mic input. When I use my laptop's internal microphone, the following recorded buffer plays back just fine:
microphone = Microphone.getMicrophone();
microphone.codec = SoundCodec.SPEEX;
microphone.setLoopBack(false);
microphone.rate = 16;
microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, gotMicData);
private function gotMicData(micData:SampleDataEvent):void {
micBuffer.writeBytes(micData.data);
}
But when I select the USB mic the sound stutters, like it's adding silence between the buffers. By the way, if I use a program like Audacity to record the USB microphone, everything works fine.
I would recommend trying the Microphone.setSilenceLevel() method. It lets you set the level of microphone activity necessary for Flash to read the audio input, so silence is not written into the buffer when no input above that level is received.
For more info:
http://help.adobe.com/en_US/ActionScript/3.0_ProgrammingAS3/WS5b3ccc516d4fbf351e63e3d118a9b90204-7d0c.html
I have a website that works much like YouTube. At the moment I am trying to capture video from a webcam.
The video should first be saved on the user's computer (in FLV format); then, if the user is satisfied, he or she can upload it to the server.
I am trying to use ActionScript 3 in Adobe Flash CS5 and Flash Media Server 4.
1- How can I do that?
2- Is Flash Media Server needed?
Please note that we would like to allow the user to save the video on his/her computer first and then be able to upload it to the server.
Many thanks.
Assuming the computer can take the overhead of doing the encoding on the fly (or has enough memory to buffer the data and then run it through an encoding process), the library mentioned in the SO answer here should work:
Encode video from any format to .flv format in AS3
I believe Flash Media Server would only really be necessary in this case for live broadcast.
Pseudocode example
private var cam:Camera;
private var vid:Video; // member variable so grabFrame() can see it
public function Whatever()
{
    // In the constructor
    cam = Camera.getCamera();
    if (cam != null)
    {
        vid = new Video(cam.width, cam.height);
        vid.attachCamera(cam);
        addChild(vid);
        addEventListener(Event.ENTER_FRAME, grabFrame);
    }
}
private function grabFrame(event:Event):void
{
    var bd:BitmapData = new BitmapData(cam.width, cam.height);
    bd.draw(vid);
    // bd now holds one frame of the video; at this point you would also
    // capture the audio, then use the FLV class from the library
}
You can also check out Red5 as an alternative open-source video stream recorder.
http://distriqt.com/post/493
Cheers