I'm pretty new to the HTML5 Audio API: I've read some of the related articles at HTML5 Rocks, but it can be a little tricky flipping between JavaScript and Dart at times.
In any case, I've been experimenting with HTML5 Audio in Dart. To produce sound effects for a simple game, I created a class as follows. I created an AudioContext, loaded sound data into SoundBuffers, and when the sound needed to be played, created an AudioBufferSourceNode via which to play the data stored in the buffers:
class Sfx {
  AudioContext audioContext;
  List<Map> soundList;
  int soundFiles;

  Sfx() {
    audioContext = new AudioContext();
    soundList = new List<Map>();
    var soundsToLoad = [
      {"name": "MISSILE", "url": "SFX/missile.wav"},
      {"name": "EXPLOSION", "url": "SFX/explosion.wav"}
    ];
    soundFiles = soundsToLoad.length;
    for (Map sound in soundsToLoad) {
      initSound(sound);
    }
  }

  bool allSoundsLoaded() => (soundFiles == 0);

  void initSound(Map soundMap) {
    HttpRequest req = new HttpRequest();
    req.open('GET', soundMap["url"], true);
    req.responseType = 'arraybuffer';
    req.on.load.add((Event e) {
      audioContext.decodeAudioData(
        req.response,
        (var buffer) {
          // successful decode
          print("...${soundMap["name"]} loaded...");
          soundList.add({"name": soundMap["name"], "buffer": buffer});
          soundFiles--;
        },
        (var error) {
          print("error loading ${soundMap["name"]}");
        }
      );
    });
    req.send();
  }

  void sfx(AudioBuffer buffer) {
    AudioBufferSourceNode source = audioContext.createBufferSource();
    source.connect(audioContext.destination, 0, 0);
    source.buffer = buffer;
    source.start(0);
  }

  void playSound(String sound) {
    for (Map m in soundList) {
      print(m);
      if (m["name"] == sound) {
        sfx(m["buffer"]);
        break;
      }
    }
  }
}
(The sound effects are in a folder "SFX". Now that I look at the code, there are probably a million better ways to organise the data, but that's beside the point right now.) I am able to play sound effects by creating an instance of Sfx and calling its playSound method.
e.g.
#import('dart:html');
#source('sfx.dart');

Sfx sfx;

void main() {
  sfx = new Sfx();
  window.on.keyUp.add((KeyboardEvent keX) {
    sfx.playSound("MISSILE");
  });
}
(Edit: added code to play sound when a key is hit.)
The problem is: although the sound effects play as expected in Safari (using the dart2js JavaScript), they are distorted when played in Dartium or in Chrome (again with the dart2js JavaScript). (In Firefox, there are even worse problems!)
Is there anything obvious that I have neglected to do or that I need to take into account? Otherwise, are there any references or tutorials, preferably in a Dart context, that might help?
Thanks for trying Dart!
First off, Firefox doesn't support the Web Audio API (yet?); Chrome and Safari do. You can track adoption of the Web Audio API here: http://caniuse.com/#feat=audio-api
Second, please try this Web Audio API sample in Dartium: https://github.com/dart-lang/dart-html5-samples/tree/master/web/webaudio/intro You will need to clone the repo first and run it locally. This sample works for me locally.
This sounds more like a bug report. If the sample from dart-html5-samples works for you, but your above code continues to be distorted, please open a bug at http://dartbug.com/new so we can take a look.
One thing to consider is waiting until the specific MISSILE sound has finished loading (your allSoundsLoaded() check, for example) before hooking up the keyUp handler.
Related
In our project, we use AudioContext to wire up input from a microphone to an AudioWorkletProcessor and out to a MediaStream. Ultimately, this is sent to other peers in a WebRTC call.
If someone loads the page, the audio always sounds fine. But if they connect with a hard-wired microphone like a laptop mic or webcam, then connect a Bluetooth device (such as AirPods or headphones), the audio becomes distorted and robotic-sounding.
If we tear out all the other code and simplify it, we still have the issue.
bypassProcessor.js
// Basic processor that wires input to output without transforming the data
// https://github.com/GoogleChromeLabs/web-audio-samples/blob/main/audio-worklet/basic/hello-audio-worklet/bypass-processor.js
class BypassProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; ++channel) {
      output[channel].set(input[channel]);
    }
    return true;
  }
}

registerProcessor('bypass-processor', BypassProcessor);
main.js
const microphoneStream = await navigator.mediaDevices.getUserMedia({
  audio: true, // have also tried { channelCount: 1 } and { channelCount: { exact: 1 } }
  video: false
})
const audioCtx = new AudioContext()
const inputNode = audioCtx.createMediaStreamSource(microphoneStream)
await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
inputNode.connect(processorNode).connect(audioCtx.destination)
Interestingly, I have found that if you comment out the two audio worklet lines and instead create a simple gain node, then it works fine.
// await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
// const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
const gainNode = audioCtx.createGain()
Also, if you simply create the AudioWorkletNode but don't even connect it to the other nodes, this also reproduces the issue.
I've created a small React app here that reproduces the problem: https://github.com/JacobMuchow/audio_distortion_repro/tree/master
I've tried some options, such as detecting when this happens via the 'devicechange' event, then closing the old AudioContext and nodes and recreating everything, but this only works some of the time. If I wait a while and then recreate it again, it works, so I'm worried about some kind of garbage-collection issue with the processor when attempting this, but that might be beside the point.
I suspect this has something to do with sample rates: when the AudioContext is correctly recreated, it switches from 48 kHz to 16 kHz and then it sounds fine. But sometimes it is recreated at 48 kHz and continues to sound robotic.
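For reference, this is roughly what I mean by closing the old context and recreating everything on 'devicechange', stripped down to a sketch (simplified from what the app actually does):
navigator.mediaDevices.addEventListener('devicechange', async () => {
  // Tear down the old graph...
  await audioCtx.close()

  // ...then rebuild everything from scratch, the same way as on page load.
  const newStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  const newCtx = new AudioContext()
  const newInput = newCtx.createMediaStreamSource(newStream)
  await newCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
  const newProcessor = new AudioWorkletNode(newCtx, 'bypass-processor')
  newInput.connect(newProcessor).connect(newCtx.destination)
})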
Threads on the internet concerning this are incredibly sparse and I'm hoping someone has specific experience with this issue or this API and can point out what I need to do differently.
For Chrome, the problem is very likely https://crbug.com/1090441, which was recently fixed. I think Firefox doesn't have this problem, but I didn't check.
I want to play a stream from gstreamer in a web browser.
I played around a bit with RTP, WebRTC and SDP files, but while VLC was able to connect to the stream from a simple SDP file, browsers were not. I later understood that WebRTC requires a secure connection, which only complicates things and is not needed for my purposes. I stumbled upon the Media Source Extensions (MSE) of HTML5, which seem like they could help, but I'm not able to find a comprehensive tutorial or appropriate specs on how to get gstreamer to stream the correct data, or on how to play it using MSE. I'm also not sure about the latency of using MSE.
So, is there a way to play a stream from gstreamer in a browser?
Thanks.
Using the node-webrtc project, I was able to combine output from gstreamer with a WebRTC call. For gstreamer, there is a project that enables its use from node: gstreamer-superficial. So basically, you run the gstreamer pipeline from the node process, which can then control gstreamer's output. For every gstreamer frame there is a callback, which takes the frame and can send it to the WebRTC calls.
Then the WebRTC call needs to be implemented. Some signaling protocol is required for the calls. One side of the call will be the server and the other will be the client's browser, instead of two browsers. Then a video track is created, into which the frames from gstreamer-superficial are pushed.
const { RTCVideoSource } = require("wrtc").nonstandard;
const gstreamer = require("gstreamer-superficial");
const source = new RTCVideoSource();
// This is WebRTC video track which should be used with addTransceiver see below
const track = source.createTrack();
const frame = {
  width: 1920,
  height: 1080,
  data: null
};

const pipeline = new gstreamer.Pipeline("v4l2src ! videorate ! video/x-raw,format=YUY2,width=1920,height=1080,framerate=25/1 ! videoconvert ! video/x-raw,format=I420 ! appsink name=sink");
const appsink = pipeline.findChild("sink");

const pull = function() {
  appsink.pull(function(buf, caps) {
    if (buf) {
      frame.data = new Uint8Array(buf);
      try {
        source.onFrame(frame);
      } catch (e) {}
      pull();
    } else if (!caps) {
      console.log("PULL DROPPED");
      setTimeout(pull, 500);
    }
  });
};

pipeline.play();
pull();
// Example:
const useTrack = SomeRTCPeerConnection => SomeRTCPeerConnection.addTransceiver(track, { direction: "sendonly" });
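For completeness, attaching the track to a server-side peer connection looks roughly like the sketch below. The signaling transport is not shown; sendToBrowser and waitForBrowserAnswer are placeholders for whatever channel (WebSocket, HTTP, ...) your application already uses, and ICE candidates also need to be exchanged over that same channel.
const { RTCPeerConnection } = require("wrtc");

async function startCall(sendToBrowser, waitForBrowserAnswer) {
  const pc = new RTCPeerConnection();
  useTrack(pc); // addTransceiver(track, { direction: "sendonly" }) from the example above

  // Classic offer/answer exchange, with the server acting as the offerer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToBrowser(pc.localDescription);

  const answer = await waitForBrowserAnswer();
  await pc.setRemoteDescription(answer);
  return pc;
}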
I'm porting my Firefox extension to Google Chrome. I make heavy use of nsIChannel to read HTTP headers and such, like so:
//initialize the channel in onStartRequest
onStartRequest: function (req /*, ctx*/) {
  var channel = req.QueryInterface(Components.interfaces.nsIChannel);
  //...more init stuff here
}

onDataAvailable: function (req, ctx, stream, offset, count) {
  //.. store the data from the stream for later processing...
  stream_ctx.bstream.setInputStream(stream);
  stream_ctx.bytes += stream_ctx.bstream.readBytes(count);
},
Does Google's Chrome browser have equivalent functions? I've seen a bit of HTTP-listener-style stuff, but so far I haven't seen anything that has all of the features of nsIChannel. Still, Mozilla's docs on accessing low-level stuff like this are a little better organized than what I've found for Chrome, so I might have just missed it.
EDIT:
I'm using a stream listener, starting with this:
Components.classes["#mozilla.org/network/io-service;1"]
.getService(Components.interfaces.nsIIOService)
.newChannel(swf_url, null, null)
.asyncOpen(stream_listener, null);
Here swf_url is the URL to a YouTube video.
The stream_listener is implemented to grab all the bytes from the incoming stream like so:
onDataAvailable: function (req, ctx, stream, offset, count) {
  stream_ctx.bstream.setInputStream(stream);
  stream_ctx.bytes += stream_ctx.bstream.readBytes(count);
},
When I get to onStopRequest I feed the bytes to a parser/decoder. What I don't know is how to replicate the onDataAvailable method to get the bytes into a stream I can feed into my parser.
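The closest workaround I can think of on the Chrome side is to simply re-request the resource with an XMLHttpRequest and read the response as an arraybuffer, something like the sketch below (handleSwfBytes stands in for my existing parser/decoder entry point), but I'm not sure that really replaces a channel/stream-listener approach:
var xhr = new XMLHttpRequest();
xhr.open('GET', swf_url, true);
xhr.responseType = 'arraybuffer';
xhr.onload = function () {
  // The full response body, roughly what the accumulated readBytes() data gave me.
  var bytes = new Uint8Array(xhr.response);
  handleSwfBytes(bytes); // placeholder for my parser/decoder entry point
};
xhr.send();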
I have captured 3 videos on my mobile, which are by default stored in the phone's gallery (Gallery/videos/). I have to play these 3 videos in one of my Flex mobile applications. How can I get the videos into the Flex project? If I need to browse the mobile directory, kindly help me with some code to do so.
I too am looking for an answer to this question. Right now, based on other Stack Overflow discussions, exhaustive perusal of tutorials and Adobe documentation, and comments on both (often the more useful resource), I'm coming to the conclusion that it's not possible.
You can use CameraRoll.browseForImage() to open the iOS gallery of photos and see all entities of MediaType.IMAGE, but it will not show you MediaType.VIDEO.
You can use CameraUI to launch the system camera by delegation, and that returns a MediaPromise, but as far as I can tell it does not save the video you capture anywhere, and I cannot find a way to access the captured video using the MediaPromise (at least using the Loader class).
Here's my code as a hint in that direction. The second code block uses the CameraRoll to browseForImage(), but there is no browseForVideo() in the API.
if(CameraUI.isSupported)
{
    camera = new CameraUI();
    camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
    camera.addEventListener(Event.CANCEL, cameraCanceled);
    camera.addEventListener(ErrorEvent.ERROR, cameraError);
    camera.launch(MediaType.VIDEO);
}
else
{
    statusText.text = "Camera not supported on this device.";
    startTimer();
}

if (CameraRoll.supportsBrowseForImage)
{
    roll = new CameraRoll();
    roll.addEventListener(MediaEvent.SELECT, cameraRollEventComplete);
    roll.addEventListener(Event.CANCEL, cameraCanceled);
    roll.addEventListener(ErrorEvent.ERROR, cameraError);
    roll.browseForImage();
}
else
{
    statusText.text = "Camera roll not supported on this device.";
    startTimer();
}
I've since found that videos captured using the delegated system camera are stored in a temporary storage location that iOS DOES allow access to. (I was pleasantly shocked.)
The captured video is not added to the device's Camera Roll the way other videos captured with the iOS system Camera app are, so it's not enough to capture video and expect to be able to access it later (if, for instance, CameraRoll.browseForVideo() is ever added to the API).
Therefore, you have to 'get while the getting is good' and move the file from the temporary storage location to some non-volatile location such as ApplicationStorageDirectory or the user's Documents directory (the only options in iOS, I think).
The MediaPromise... I think... is completely useless for accessing the video via any direct progressive loader/streamer method, but it still provides the location/url/path/filename of the temporary file, so you can perform File operations on it.
Ironic that there are tutorials for getting around the lack of a file location/url/path/filename in the MediaPromise when using CameraRoll.browseForImage() (the trick is to use a loader class to load the image content, which you can then write out to a file), yet when taking video the content itself is not accessible and instead a file location/url/path/filename is provided. Ironic, too, that there are nearly no resources I was able to find to help with this. grumble
I'm going to include some code chunks without really editing them to strip out extraneous bits, because it's way past when I need to be in bed, but I wanted you to have this. I may come back and clean it up later.
This section is in a Spark SkinnablePopUpContainer and I use the same click event for several buttons, thus the below 'case' is in the switch-case in that event handler function.
In case you are not familiar, the 'close(true, data)' is the method to close the SkinnablePopUpContainer, tell the parent/owner that the container was closed purposefully and that it should look for the data object being shared back (i.e., there are changes to be 'commit'ed).
case "cameraVideo":
{
if(CameraUI.isSupported)
{
camera = new CameraUI();
camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
camera.addEventListener(Event.CANCEL, cameraCanceled);
camera.addEventListener(ErrorEvent.ERROR, cameraError);
camera.launch(MediaType.VIDEO);
}
else
{
statusText.text = "Camera not supported on this device.";
startTimer();
}
break;
}
protected function cameraCanceled(event:Event):void
{
statusText.text = "Camera access canceled by user.";
startTimer();
}
protected function cameraError(event:ErrorEvent):void
{
statusText.text = "There was an error while trying to use the camera.";
startTimer();
}
protected function videoMediaEventComplete(event:MediaEvent):void
{
statusText.text="Preparing captured video...";
camera.removeEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
camera.removeEventListener(Event.CANCEL, cameraCanceled);
camera.removeEventListener(ErrorEvent.ERROR, cameraError);
var media:MediaPromise = event.data;
data.MediaType = MediaType.VIDEO;
data.MediaPromise = media;
data.source = "camera video";
close(true,data)
}
This section is the ActionScript in the close handler of the parent/owner of the SkinnablePopUpContainer (truncated once the useful code is included):
private function choosePictureLightboxClosed(event:PopUpEvent):void
{
    imageButtonsActive = false;
    if(event.commit)
    {
        this.data = event.data as Object;
        filters = new Array();
        selection = true;
        switch(data.MediaType)
        {
            case MediaType.VIDEO:
            {
                mediaType = "video";
                trace(data.MediaPromise.file.url + " - " + data.MediaPromise.relativePath + " - " + data.MediaPromise.mediaType);
                var sourceFile:File = new File(data.MediaPromise.file.url);
                var destinationFile:File = File.applicationStorageDirectory.resolvePath("User" + parentApplication.userid);
                if(destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();
                destinationFile = destinationFile.resolvePath("Videos");
                if(destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();
                destinationFile = destinationFile.resolvePath(parentApplication.userid + "Video" + new Date().getTime() + ".mov");
                trace(destinationFile.nativePath);
                sourceFile.moveTo(destinationFile, true);
                break;
            }
I sure do hope this helps. This has been a very frustrating experience (and a costly one, in terms of our project being government-grant funded and having deadlines we utterly failed to meet), and I very much hope that these hard-won solutions might help others avoid the same.
I want to write a basic script using the HTML5 Web Audio API that can play some audio files. But I don't know how to unload a playing audio file and load another one. In my script, two audio files play at the same time, which is not what I wanted.
Here is my code:
var context,
    soundSource,
    soundBuffer;

// Step 1 - Initialise the Audio Context
context = new webkitAudioContext();

// Step 2: Load our Sound using XHR
function playSound(url) {
  // Note: this loads asynchronously
  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.responseType = "arraybuffer";

  // Our asynchronous callback
  request.onload = function() {
    var audioData = request.response;
    audioGraph(audioData);
  };

  request.send();
}

// This is the code we are interested in
function audioGraph(audioData) {
  // create a sound source
  soundSource = context.createBufferSource();

  // The Audio Context handles creating source buffers from raw binary
  soundBuffer = context.createBuffer(audioData, true /* make mono */);

  // Add the buffered data to our object
  soundSource.buffer = soundBuffer;

  // Plug the cable from one thing to the other
  soundSource.connect(context.destination);

  // Finally
  soundSource.noteOn(context.currentTime);
}

// Stop all of the sounds
function stopSounds() {
  // How can I do this?
}

// Events for audio buttons
document.querySelector('.pre').addEventListener('click',
  function () {
    stopSounds();
    playSound('http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3');
  }
);

document.querySelector('.next').addEventListener('click',
  function() {
    stopSounds();
    playSound('http://thelab.thingsinjars.com/web-audio-tutorial/nokia.mp3');
  }
);
You should be pre-loading the sounds into buffers once, at launch, and simply creating a new AudioBufferSourceNode whenever you want to play one back (source nodes are one-shot and can't be reused).
To play multiple sounds in sequence, you need to schedule them using noteOn(time), one after the other, based on the respective buffer lengths.
To stop sounds, use noteOff.
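A minimal sketch of that pattern, sticking with the prefixed webkitAudioContext / noteOn / noteOff API your code already uses (the helper names and the buffers map are made up for illustration):
var context = new webkitAudioContext();
var buffers = {};          // url -> decoded AudioBuffer, filled once at launch
var currentSource = null;  // whatever is playing right now, if anything

// Load and decode each sound once, up front.
function loadSound(url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  request.onload = function () {
    context.decodeAudioData(request.response, function (buffer) {
      buffers[url] = buffer;
    });
  };
  request.send();
}

function stopSounds() {
  if (currentSource) {
    currentSource.noteOff(0); // stop immediately
    currentSource = null;
  }
}

// Source nodes are one-shot, so make a fresh one for every playback.
function playSound(url) {
  stopSounds();
  var source = context.createBufferSource();
  source.buffer = buffers[url];
  source.connect(context.destination);
  source.noteOn(context.currentTime);
  currentSource = source;
}

// To play several buffers back to back, schedule each start time from the
// accumulated durations instead of waiting for the previous one to finish.
function playSequence(urls) {
  var when = context.currentTime;
  urls.forEach(function (url) {
    var source = context.createBufferSource();
    source.buffer = buffers[url];
    source.connect(context.destination);
    source.noteOn(when);
    when += buffers[url].duration;
  });
}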
Sounds like you are missing some fundamental web audio concepts. This (and more) is described in detail and shown with samples in this HTML5Rocks tutorial and the FAQ.