How can I generate a tone (pure sine wave, for instance) using only javascript and Chromium's WebAudio API?
I would like to accomplish something like the Firefox equivalent.
The Chromium WebAudio demos here appear to all use prerecorded <audio> elements.
Thanks!
The Web Audio API has what's known as the OscillatorNode interface to generate the tones you're talking about. They're pretty straightforward to get going...
var context = new (window.AudioContext || window.webkitAudioContext)(), // standard constructor, with the old webkit prefix as a fallback
    oscillator = context.createOscillator(); // oscillator defaults to a sine wave
oscillator.connect(context.destination); // route the tone to the speakers
oscillator.start();
You can change the type of wave by setting the type property:
oscillator.type = 'square'; // 'sine', 'square', 'sawtooth' or 'triangle'
(Older implementations used numeric constants, e.g. oscillator.type = 1 or oscillator.type = oscillator.SQUARE, but those are deprecated in favor of the string values.)
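If you want control over pitch and duration as well, here's a minimal sketch (the 440 Hz frequency and the one-second stop time are arbitrary choices of mine):

var context = new (window.AudioContext || window.webkitAudioContext)();
var oscillator = context.createOscillator();
oscillator.type = 'sine'; // a pure sine wave
oscillator.frequency.setValueAtTime(440, context.currentTime); // concert A, 440 Hz
oscillator.connect(context.destination);
oscillator.start(); // begin playback immediately
oscillator.stop(context.currentTime + 1); // stop after one second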
I've written an article that covers this topic in more detail, so it might be of some use to you!
Probably not the best way, but I used dsp.js to generate different types of sinusoids, then passed them off to the Web Audio API in this demo: http://www.htmlfivewow.com/demos/waveform-generator/index.html
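A rough sketch of the same idea without dsp.js: fill an AudioBuffer with a computed sine wave and play it through an AudioBufferSourceNode (the frequency and duration values are arbitrary):

var context = new (window.AudioContext || window.webkitAudioContext)();
var sampleRate = context.sampleRate;
var duration = 1;    // seconds
var frequency = 440; // Hz

// Fill a one-channel buffer with a computed sine wave.
var buffer = context.createBuffer(1, sampleRate * duration, sampleRate);
var samples = buffer.getChannelData(0);
for (var i = 0; i < samples.length; i++) {
    samples[i] = Math.sin(2 * Math.PI * frequency * i / sampleRate);
}

var source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
source.start();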
I have 2 separate audio files, that will play through the WebAudio API.
One contains background music and will always be on.
The other contains speech, and my graphics will react to the pitch of the sound. I have a working bit of code that uses the Web Audio context from createjs together with an AnalyserNode and a DynamicsCompressorNode to figure out the pitch of the currently playing sound file.
function init() {
    // Reuse the AudioContext that SoundJS (createjs) already created.
    this.context = createjs.WebAudioPlugin.context;

    // Analyser node for reading frequency data every frame.
    this.analyserNode = this.context.createAnalyser();
    this.analyserNode.fftSize = 32;
    this.analyserNode.smoothingTimeConstant = 0.7;
    this.analyserNode.connect(this.context.destination);

    // Re-route the plugin's compressor through the analyser instead of
    // letting it connect straight to the destination.
    this.dynamicsNode = createjs.WebAudioPlugin.dynamicsCompressorNode;
    this.dynamicsNode.disconnect();
    this.dynamicsNode.connect(this.analyserNode);

    this.freqFloatData = new Float32Array(this.analyserNode.frequencyBinCount);
    this.freqByteData = new Uint8Array(this.analyserNode.frequencyBinCount);
}

function onUpdate() {
    this.analyserNode.getByteFrequencyData(this.freqByteData);
    // ---- do stuff with this.freqByteData[0]; ----
}
However, when I play the background music and the speech file simultaneously, I can only retrieve the combined spectrum of both sounds as one.
Is it possible, through the Web Audio API, to get the pitch of a single sound file while multiple files are playing?
If you mean "can I apply equalization only to the music tracks, and have the speech sounds unaffected" the answer is yes, definitely - equalization (and other filtering) is an inline effect, it doesn't need to be applied globally. If you mean "I have source material with speech and music mixed, can I separate them" then Ken's right, that's a much harder challenge.
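Analysis works the same way: it's per-node, so you can read the spectrum of just the speech by giving that source its own AnalyserNode before the two paths merge at the destination. A minimal sketch of the idea (speechSource and musicSource are placeholders for however you create your two sources):

var context = new (window.AudioContext || window.webkitAudioContext)();

// Placeholder sources; in practice these might be AudioBufferSourceNodes
// or MediaElementAudioSourceNodes created from the two files.
var speechSource = context.createBufferSource();
var musicSource = context.createBufferSource();

// The speech path goes through its own analyser, so its readings
// never include the music.
var speechAnalyser = context.createAnalyser();
speechSource.connect(speechAnalyser);
speechAnalyser.connect(context.destination);

// The music path bypasses the speech analyser entirely.
musicSource.connect(context.destination);

// Per-frame readout of the speech-only spectrum.
var speechData = new Uint8Array(speechAnalyser.frequencyBinCount);
speechAnalyser.getByteFrequencyData(speechData);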
I am trying to build an audio recorder app similar to the one built into iOS 7, and I'm looking for guidance on which controls to use. I understand I will be using a table view for the list of previous recordings and a UIView for the top recording view, and on tapping record I'll adjust the table view and move the black recording view down.
How should I implement the endless horizontal scrolling view? Should I use a collection view and keep adding elements to the model array as time increments? Also, what should I use for the timer? Is there something like JavaScript's setInterval in Objective-C that I can use to keep updating the UI at a regular time interval?
If someone also knows of a CocoaPod or sample code, that would be greatly appreciated.
For recording, the simplest audio recorder is AVAudioRecorder. Here is a simple implementation of an audio recording app: https://github.com/calmez/Recorder. AVAudioRecorder has simple metering methods that let you read the output level of each channel.
Honestly though, Apple would probably use Core Audio to get the audio because it is more optimized. Novocaine is a good Core Audio engine that could get you started: https://github.com/alexbw/novocaine
For rendering the waveform, I would guess that Apple probably uses OpenGL. I don't see how to do it easily and efficiently otherwise. You could draw it using the standard drawing APIs for UIView, like this project does (https://github.com/fulldecent/FDWaveformView), but I don't see that animating well.
For the timer, there is NSTimer.
I want to write a Windows Store App that can capture video (without any sound) and take pictures. Imagine a digital camera: you can preview the picture on the screen of your device before pushing the button which takes the pic.
The problem I'm facing is that the Windows.Media.Capture namespace only has classes that capture video with sound (CameraCaptureUI, MediaCapture). I'm not troubled by the objects' capabilities, but by the fact that I would have to declare the Microphone capability in the app's manifest, which makes no sense when the app doesn't use it. I need a class that only requires the Webcam capability.
Any ideas?
I found the answer and I thought I should share it. I'm sorry for answering my own question, but here goes:
One can specify in the settings of the MediaCapture object, when initializing it, that it should capture only video:
var mediaCaptureMgr = new MediaCapture();
var captureSettings = new MediaCaptureInitializationSettings();

// Capture video only; the Microphone capability is no longer needed in the manifest.
captureSettings.StreamingCaptureMode = StreamingCaptureMode.Video;

await mediaCaptureMgr.InitializeAsync(captureSettings);
RTFM!
I embedded a live webcam feed in an HTML page. Now I want to detect hand gestures. I have no idea how to do this using JavaScript; I Googled a lot but didn't find anything that worked. Does anyone know how to do this?
Accessing the webcam requires the HTML5 WebRTC API, which is available in most modern browsers apart from Internet Explorer and iOS.
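A minimal sketch of requesting camera access in current browsers (the 'webcam' element id is my own placeholder):

// Ask the browser for camera access and pipe the stream into a <video> element.
navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
        document.getElementById('webcam').srcObject = stream;
    })
    .catch(function (err) {
        console.error('Webcam access failed or was denied:', err);
    });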
Hand gesture detection can be done in JavaScript using Haar Cascade Classifiers (ported from OpenCV) with js-objectdetect or HAAR.js.
Example using js-objectdetect in JavaScript/HTML5: open vs. closed hand detection (the "A" gesture of the American Sign Language alphabet).
Here is a JavaScript hand-tracking demo -- it relies on HTML5 features which are not yet enabled in all typical browsers, it doesn't work well at all here, and I don't believe it covers gestures, but it might be a start for you: http://code.google.com/p/js-handtracking/
You need a motion-detecting device (a camera), and you can use a Kinect to get the motion of different parts of the body. You will have to send data about the body parts and their positions to the browser, where you can manipulate the data according to your requirements.
Here you can find out how to do it: Motion detection and rendering
More about Kinect: General info
While this is a really old question, there are some new opportunities to do hand tracking using fast neural networks and images from a webcam, all in JavaScript. I'd recommend the Handtrack.js library, which uses Tensorflow.js for exactly this purpose.
A simple usage example:
<!-- Load the handtrackjs model. -->
<script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"></script>

<!-- Replace this with your image. Make sure CORS settings allow reading the image! -->
<img id="img" src="hand.jpg"/>
<canvas id="canvas" class="border"></canvas>

<!-- Place your code in the script tag below. You can also use an external .js file -->
<script>
    // Notice there is no 'import' statement: 'handTrack' and 'tf' are
    // available on the page because of the script tag above.
    const img = document.getElementById('img');
    const canvas = document.getElementById('canvas');
    const context = canvas.getContext('2d');

    // Load the model, then detect hands in the image.
    handTrack.load().then(model => {
        model.detect(img).then(predictions => {
            console.log('Predictions: ', predictions);
        });
    });
</script>
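The canvas declared above isn't used in the snippet; if you want to draw the detected boxes onto it, the library's README shows a renderPredictions helper (treat the exact signature here as an assumption on my part):

// Inside the .then(predictions => { ... }) callback above:
// draw the detection boxes over the image on the canvas.
model.renderPredictions(predictions, canvas, context, img);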
Demo: running codepen
Also see a similar neural network implementation in Python.
Disclaimer: I maintain both projects.
I want to create a SWF or AIR app that takes an audio input and displays a graphic representation of it.
The graphics side is fine for the moment, but every example uses an MP3, for instance:
var sound:Sound = new Sound(new URLRequest("myMP3.mp3"));
I want it to be able to take a 'live' audio feed. How would I do that, and is there a better way than using the microphone input?
thanks
dai2
Microphone input is probably the easiest way, and with Flash Player 10.1 or AIR 2.0 you can do this. Take a look at http://www.adobe.com/devnet/air/flex/articles/using_mic_api.html
It's not as easy as using SoundMixer.computeSpectrum, but it offers that possibility (and more). For instance, you'll most likely want to run the data through an FFT to use it for visualization.