Web Audio node connected to two gain nodes, connected to destination, duplicates speed / pitch - google-chrome

As the title says, if I have an audio node that emits sound and I connect it to two separate GainNodes, which in turn are connected to the AudioContext destination, the sound plays at double speed / double pitch (as if half the samples were sent to one gain node and half to the other, with the playback time halved as well).
I have created a handy jsfiddle here; just drag your sound files onto the black rectangle canvas and listen.
// audioContext: Web Audio context
// decoded: decoded audioBuffer
// gainNode1, gainNode2: gain nodes
var bSrc = audioContext.createBufferSource();
bSrc.connect(gainNode1);
bSrc.connect(gainNode2);
gainNode1.connect(audioContext.destination);
gainNode2.connect(audioContext.destination);
bSrc.buffer = decoded;
bSrc.loop = false;
// You'll hear two double-speed buffers playing in unison
bSrc.start(0);
Is that by design? What I would like is to exactly "duplicate" the sound, which will be sent along two different routes (the fiddle is just a proof of concept for a bigger project).
Edit:
I tested this on Chrome Version 24.0.1312.56 / Ubuntu 12.10 and the behaviour is present.
The behaviour is also present on Chrome Version 24.0.1312.68 / Ubuntu 12.10
On Chrome Version 24.0.1312.57 / Mac OSX, the Audio API works well and this behaviour is not present.
Could it be a Linux-only issue?

Sounds like a Linux implementation issue. It works for me in Chrome on OS X.
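If you need a workaround on the affected build, one thing worth trying (just a sketch, not verified on the Linux versions mentioned above) is to fan out from a single intermediate GainNode instead of connecting the buffer source to both gain nodes directly:
// Unverified workaround sketch: route the source through one pass-through
// gain node and fan out from there instead of from the buffer source itself.
var passthrough = audioContext.createGain();
passthrough.gain.value = 1;
bSrc.connect(passthrough);
passthrough.connect(gainNode1);
passthrough.connect(gainNode2);
gainNode1.connect(audioContext.destination);
gainNode2.connect(audioContext.destination);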

Related

Issues with RTSP video in browser using Streamedian

I have a system where I need to be able to view 4 RTSP feeds in a web browser amongst other web content. The feeds are coming from "Amcrest ProHD 1080P PTZ Camera Outdoor, 2MP Outdoor Vandal Dome IP PoE Camera" units. All of the connectivity is wired within the same intranet. I'm viewing the cameras on Insignia Amazon Fire TVs, and I'm not getting the results that I expected.
In order to get it working, I installed Streamedian on my Windows machine and got it configured properly. I didn't change much about the default configuration of the proxy server except the port number it was using to serve the feeds on. On the browser side, I included the necessary JavaScript files (again, only changing the port numbers) and made my call to load the players. Currently, that code looks like this:
var playerOptions = {
    socket: 'ws://192.168.1.253:8080/ws/',
    redirectNativeMediaErrors: true,
    bufferDuration: 5,
    continuousFileLength: 5000,
    eventFileLength: 5000,
    errorHandler: function (err) {
        console.log(err.message);
    },
};
setTimeout(function () {
    player1 = Streamedian.player("p1", playerOptions);
    player2 = Streamedian.player("p2", playerOptions);
    player3 = Streamedian.player("p3", playerOptions);
    player4 = Streamedian.player("p4", playerOptions);
}, 100);
Each of the players looks like this in the HTML:
<video id="p1" controls autoplay>
<source src="rtsp://path.to.camera" type="application/x-rtsp">
</video>
The players do load, and appear to be attempting to start the stream. However, they quickly start buffering and never come out of it to start showing anything resembling a live feed. Any ideas on what I can do to try to get this running? Am I even heading down a valid path to get where I need to be?
I'm not worried about rewind, or anything in the past. This is purely for live streaming and only needs to show "now". Saving anything more than necessary is not required in order to meet the needs.
I've tried dropping the frame rate to 4fps. It didn't seem to help.
The Windows machine has average power at best. I've opened Task Manager while the streams are running, but nothing appears to be pegged out. Am I just running an under-powered machine for what I'm asking it to do?
Are the browsers on the TVs, or the TVs themselves, just not capable of processing that much information? I've tried viewing the pages on my laptop as well (separate from the Windows server, and plenty powerful), but the results do not improve.
Should I be trying something else on my server side? I came across iSpyConnect this evening as I was searching for other (preferably free) options, but I can't tell for sure if it will get me what I need. I've also started looking at signboard applications for the TVs, but they all have a cost associated with them that I'd like to avoid if at all possible.
If anyone has any thoughts, ideas, comments, suggestions or questions about what I have in place, please let me know so that I can try to get something up and running within the facility.
Thanks!

Consistent Empty Data using MediaRecorderAPI, intermittently

I have a simple setup for Desktop Capturing using html5 libraries.
This includes a simple webpage and a chrome-extension. I am using
Extension to get the sourceId
Using the sourceId I call navigator.mediaDevices.getUserMedia to get the MediaStream
This MediaStream is then fed into an instance of MediaRecorder for recording.
This setup works most of the time, but occasionally I see that requestData() on the MediaRecorder instance consistently returns blobs with empty data. I am clueless as to what can cause a working setup to start misbehaving sometimes.
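For reference, the pipeline looks roughly like this (a simplified sketch; the constraint names follow Chrome's desktop capture API, and sourceId is assumed to come from the extension via chrome.desktopCapture.chooseDesktopMedia):
// Step 2: turn the sourceId obtained by the extension into a MediaStream
navigator.mediaDevices.getUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: sourceId
        }
    }
}).then(function (stream) {
    // Step 3: feed the stream into a MediaRecorder
    var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
    recorder.ondataavailable = function (e) {
        // in the bad state described above, these blobs are consistently empty
        console.log('chunk size:', e.data.size);
    };
    recorder.start(1000); // emit a blob roughly every second
});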
Some weird behaviour that I noticed in the bad state:
When I try to close/refresh the window it doesn't respond.
The MediaStreamTrack object in Step 2) above is 'live' but as soon as I go to Step 3) it becomes 'muted'.
There's no pattern to it; sometimes it even happens when I request the MediaStreams the very first time (which rules out the possibility that there could be some dangling resources eating up the contexts).
Is there anything that I am doing wrong and am unaware of? Any help/pointers would be highly appreciated!

Using Node to stream Video to HTML5

I've been playing around with Node and websockets and built a small test app that streams audio using websockets. The server breaks apart the mp3 using createReadStream, throttles the stream using node-throttle and sends the binary data using the "ws" module.
On the client side I pick up the chunks on the websocket and use decodeAudioData (http://www.html5rocks.com/en/tutorials/webaudio/intro/) to decode and play the chunk. It all works relatively ok.
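For reference, the server side looks roughly like this (a trimmed sketch with an illustrative file name and byte rate; the npm "throttle" package stands in here for node-throttle):
var fs = require('fs');
var WebSocketServer = require('ws').Server;
var Throttle = require('throttle'); // throttling transform stream

var wss = new WebSocketServer({ port: 8080 });
wss.on('connection', function (ws) {
    fs.createReadStream('song.mp3')
        .pipe(new Throttle(16000)) // ~128 kbps mp3, expressed in bytes per second
        .on('data', function (chunk) {
            // the browser decodes each chunk with decodeAudioData
            ws.send(chunk, { binary: true });
        });
});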
What I was curious to do next was to stream video in the same manner to the HTML5 video tag. But I can't really find any reference material on the web to achieve this in the same manner as my audio test above.
Is there a video equivalent for "decodeAudioData"?
Can I feed chunks of data into a video tag?
I've got a similar sample running that I picked up from...
https://gist.github.com/paolorossi/1993068
But this isn't really what I am looking for. First of all it doesn't really seem to be streaming to me. The client buffers it all before playing it.
Also, similar to my audio test I want the stream to be throttled on the server side so that when a new client connects they join the video at whatever point it is currently at. i.e. 30 minutes in or whatever.
Thanks
OK,
I found a solution to this after much searching.
The MediaSource API is what I was looking for...
var mediaSource = new MediaSource();
var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
sourceBuffer.append(new Uint8Array(data));
This link provided the solution...
http://html5-demos.appspot.com/static/media-source.html
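One caveat: addSourceBuffer can only be called after the MediaSource fires its sourceopen event, and current browsers use appendBuffer rather than append, so a slightly fuller sketch looks like the following (the element lookup and mime type are just examples, and ws is assumed to be the websocket delivering the chunks):
var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
    var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
    ws.binaryType = 'arraybuffer'; // receive chunks as ArrayBuffers
    ws.onmessage = function (event) {
        // a real implementation should queue appends while sourceBuffer.updating is true
        sourceBuffer.appendBuffer(new Uint8Array(event.data));
    };
});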

Captured audio buffers are all silent on Windows Phone 8

I'm trying to capture audio using WASAPI. My code is largely based on the ChatterBox VoIP sample app. I'm getting audio buffers, but they are all silent (flagged AUDCLNT_BUFFERFLAGS_SILENT).
I'm using Visual Studio Express 2012 for Windows Phone. Running on the emulator.
I had the exact same problem and managed to reproduce it in the ChatterBox sample app if I set Visual Studio to native debugging and stepped through the code at any point.
Also, closing the App without going through the "Stop" procedure and stopping the AudioClient will require you to restart the emulator/device before being able to capture audio data again.
It nearly drove me nuts before I figured out the aforementioned problems, but I finally got it working.
So:
1. Be sure to NOT do native debugging
2. Always call IAudioClient->Stop(); before terminating the App.
3. Make sure you pass the correct parameters to IAudioClient->Initialize();
I've included a piece of code that works 100% of the time for me. I've left out error checking for clarity:
LPCWSTR pwstrDefaultCaptureDeviceId =
    GetDefaultAudioCaptureId(AudioDeviceRole::Communications);
HRESULT hr = ActivateAudioInterface(pwstrDefaultCaptureDeviceId,
    __uuidof(IAudioClient2), (void**)&m_pAudioClient);
hr = m_pAudioClient->GetMixFormat(&m_pwfx);
m_frameSizeInBytes = (m_pwfx->wBitsPerSample / 8) * m_pwfx->nChannels;
hr = m_pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED,
    AUDCLNT_STREAMFLAGS_NOPERSIST | AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
    latency * 10000, 0, m_pwfx, NULL);
hr = m_pAudioClient->SetEventHandle(m_hCaptureEvent);
hr = m_pAudioClient->GetService(__uuidof(IAudioCaptureClient),
    (void**)&m_pCaptureClient);
And that's it. Before calling this code I've started a worker thread that listens for m_hCaptureEvent and calls IAudioCaptureClient->GetBuffer() whenever the capture event is triggered.
Of course, using Microsoft.XNA.Audio.Microphone works fine too, but it's not always an option to reference the XNA framework. :)
It was a really annoying problem which wasted about 2 complete days of mine. My problem was solved by setting AudioClientProperties.eCategory to AudioCategory_Communications instead of AudioCategory_Other.
After this long trial-and-error period I am not sure that the problem won't repeat in the future, because the API doesn't behave very stably and every run may return a different result.
Edit: Yeah, my guess was true. Restarting the WP emulator makes the buffers silent again, but changing AudioClientProperties.eCategory back to AudioCategory_Other solves it again. I still don't know what is wrong with it or what the final solution is.
Again I encountered the same problem, and this time commenting out (removing) the line
properties.eCategory = AudioCategory_Communications;
solved the problem.
I can add my piece of advice for Windows Phone 8.1.
I ran the following experiment:
Open capture device. Buffers are not silent.
Open render device with AudioDeviceRole::Communications. Buffers immediately go silent.
Close render device. Buffers are not silent.
Then I opened the capture device with AudioDeviceRole::Communications, and the capture device works fine all the time.
On Windows 10 the capture device works all the time, whether or not you open it with AudioDeviceRole::Communications.
I've had the same problem. It seems like you can either use only AudioCategory_Other or create an instance of VoipPhoneCall and use only AudioCategory_Communications.
So the solution in my case was to use AudioCategory_Communications and create an outgoing VoipPhoneCall. You should implement the background agents as in the ChatterBox VoIP sample app for the VoipCallCoordinator to work.

Using local file for Web Audio API in Javascript

I'm trying to get sound working in my iPhone game using the Web Audio API. The problem is that this app is entirely client side. I want to store my mp3s in a local folder (and load them without requiring user input), so I can't use XMLHttpRequest to read the data. I was looking into using FileSystem but Safari doesn't support it.
Is there any alternative?
Edit: Thanks for the responses below. Unfortunately the Audio API is horribly slow for games. I had this working and the latency just makes the user experience unacceptable. To clarify, what I need is something like -
var request = new XMLHttpRequest();
request.open('GET', 'file:///./../sounds/beep-1.mp3', true);
request.responseType = 'arraybuffer';
request.onload = function() {
    context.decodeAudioData(request.response, function(buffer) {
        dogBarkingBuffer = buffer;
    }, onError);
}
request.send();
But this gives me the errors -
XMLHttpRequest cannot load file:///sounds/beep-1.mp3. Cross origin requests are only supported for HTTP.
Uncaught Error: NETWORK_ERR: XMLHttpRequest Exception 101
I understand the security risks with reading local files, but surely reading from within your own domain should be OK?
I had the same problem and I found this very simple solution.
audio_file.onchange = function(){
    var files = this.files;
    var file = URL.createObjectURL(files[0]);
    audio_player.src = file;
    audio_player.play();
};
<input id="audio_file" type="file" accept="audio/*" />
<audio id="audio_player" />
You can test here:
http://jsfiddle.net/Tv8Cm/
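If you specifically need the Web Audio API (as in the question) rather than an <audio> element, the same picked file can be decoded directly; a rough sketch, assuming context is an existing AudioContext:
audio_file.onchange = function () {
    var reader = new FileReader();
    reader.onload = function () {
        // decode the chosen file into an AudioBuffer and play it
        context.decodeAudioData(reader.result, function (buffer) {
            var source = context.createBufferSource();
            source.buffer = buffer;
            source.connect(context.destination);
            source.start(0);
        });
    };
    reader.readAsArrayBuffer(this.files[0]);
};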
OK, it's taken me two days of prototyping different solutions and I've finally figured out how I can do this without storing my resources on a server. There are a few blogs that detail this but I couldn't find the full solution in one place, so I'm adding it here. This may be considered a bit hacky by seasoned programmers but it's the only way I can see this working, so if anyone has a more elegant solution I'd love to hear it.
The solution was to store my sound files as Base64-encoded strings. The sound files are relatively small (less than 30kb) so I'm hoping performance won't be too much of an issue. Note that I put 'xxx' in front of some of the hyperlinks as my n00b status means I can't post more than two links.
Step 1: create Base 64 sound font
First I need to convert my mp3 to a Base64 encoded string and store it as JSON. I found a website that does this conversion for me here - xxxhttp://www.mobilefish.com/services/base64/base64.php
You may need to remove return characters using a text editor but for anyone that needs an example I found some piano tones here - xxxhttps://raw.github.com/mudcube/MIDI.js/master/soundfont/acoustic_grand_piano-mp3.js
Note that in order to work with my example you'll need to remove the header part data:audio/mpeg;base64,
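(If you'd rather not rely on a website for the conversion, a tiny Node sketch can produce the same kind of string; the file and variable names below are just illustrative.)
var fs = require('fs');
var b64 = fs.readFileSync('beep-1.mp3').toString('base64');
// write the result out as a small "sound font" module with a single sample
fs.writeFileSync('soundfont.js', 'var mySounds = { beep: "' + b64 + '" };');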
Step 2: decode sound font to ArrayBuffer
You could implement this yourself but I found an API that does this perfectly (why re-invent the wheel, right?) - https://github.com/danguer/blog-examples/blob/master/js/base64-binary.js
Resource taken from - here
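If you prefer not to pull in the library, the same decode can be done with built-ins (a roughly equivalent sketch using atob):
function base64ToArrayBuffer(base64) {
    var binary = atob(base64);               // Base64 string -> binary string
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    return bytes.buffer;                     // ArrayBuffer for decodeAudioData
}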
Step 3: Adding the rest of the code
Fairly straightforward
var cNote = acoustic_grand_piano.C2;
var byteArray = Base64Binary.decodeArrayBuffer(cNote);
var context = new webkitAudioContext();
context.decodeAudioData(byteArray, function(buffer) {
    var source = context.createBufferSource(); // creates a sound source
    source.buffer = buffer;
    source.connect(context.destination); // connect the source to the context's destination (the speakers)
    source.noteOn(0);
}, function(err) { console.log("err(decodeAudioData): " + err); });
And that's it! I have this working in my desktop version of Chrome and also running on mobile Safari (iOS 6 only, of course, as Web Audio is not supported in older versions). It takes a couple of seconds to load on mobile Safari (vs. less than 1 second on desktop Chrome), but this might be due to the fact that it spends time downloading the sound fonts. It might also be the fact that iOS prevents any sound playing until a user interaction event has occurred. I need to do more work looking at how it performs.
Hope this saves someone else the grief I went through.
Because iOS apps are sandboxed, the web view (basically Safari wrapped in PhoneGap) allows you to store your mp3 file locally, i.e. there is no "cross domain" security issue.
This is as of iOS 6, as previous iOS versions didn't support the Web Audio API.
Use the HTML5 audio tag for playing the audio file in the browser.
Ajax requests work over the HTTP protocol, so when you try to get an audio file using file://, the browser marks the request as a cross-origin request. Set the following header in the server's response -
header('Access-Control-Allow-Origin: *');