I am working with the HTML5 Web Audio API to play sound. This works fine with regular MP3 files, but when using a sound stream such as http://95.173.167.24:8009, it fails to play.
Here is the code I'm using:
if ('webkitAudioContext' in window) {
    var myAudioContext = new webkitAudioContext();
}

var request = new XMLHttpRequest();
request.open('GET', 'http://95.173.167.24:8009', true);
request.responseType = 'arraybuffer';
request.addEventListener('load', bufferSound, false);
request.send();

function bufferSound(event) {
    var request = event.target;
    var source = myAudioContext.createBufferSource();
    source.buffer = myAudioContext.createBuffer(request.response, false);
    source.connect(myAudioContext.destination);
    source.noteOn(0);
}
Can anyone point me in the right direction on this?
Any help is appreciated.
Thanks
The problem is likely that SHOUTcast is detecting your User-Agent string as a browser. It looks for any string with Mozilla in it, and says "Oh, that's a browser! Send them the admin panel."
You need to force the usage of the audio stream. Fortunately, this is easily done by adding a semicolon at the end of your URL:
http://95.173.167.24:8009/;
Note that the User-Agent string in your logs will be MPEG OVERRIDE.
This will work for most browsers. Some browsers may still not like the HTTP-like responses that come from SHOUTcast, but this will at least get you started.
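For example, the only change needed in the code from the question is the URL passed to the request:

// Same request as before; the trailing /; makes SHOUTcast serve the
// raw audio stream instead of its admin page.
request.open('GET', 'http://95.173.167.24:8009/;', true);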
I am currently looking for the best way to store incoming WebRTC video streams. I am joining the video call using WebRTC (via Chrome) and I would like to record every incoming video stream from each participant to the browser.
The solutions I am researching are:
Intercepting network packets coming to the browser, e.g. using Wireshark, and then decoding them, following this article: https://webrtchacks.com/video_replay/
Modifying a browser to store the recording as a file, e.g. by modifying Chromium itself
Any screen recorders or solutions like xvfb & ffmpeg are not options due to resource constraints. Is there any other way that could let me capture packets or encoded video as a file? The solution must work on Linux.
If the media stream is what you want, one method is to override the browser's PeerConnection. Here is an example:
In an extension manifest add the following content script:
content_scripts": [
{
"matches": ["http://*/*", "https://*/*"],
"js": ["payload/inject.js"],
"all_frames": true,
"match_about_blank": true,
"run_at": "document_start"
}
]
inject.js
var inject = '(' + function() {
    // Override the browser's default RTCPeerConnection.
    var origPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
    // Make sure it is supported.
    if (origPeerConnection) {
        // Our own RTCPeerConnection.
        var newPeerConnection = function(config, constraints) {
            console.log('PeerConnection created with config', config);
            // Proxy the original peer connection.
            var pc = new origPeerConnection(config, constraints);
            // Store the old addStream.
            var oldAddStream = pc.addStream;
            // addStream is called when a local stream is added;
            // arguments[0] is a local MediaStream.
            pc.addStream = function() {
                console.log("our addStream called!");
                // Our MediaStream object.
                console.dir(arguments[0]);
                return oldAddStream.apply(this, arguments);
            };
            // ontrack is called when a remote track is added;
            // the media stream(s) are located in event.streams.
            pc.ontrack = function(event) {
                console.log("ontrack got a track");
                console.dir(event);
            };
            window.ourPC = pc;
            return pc;
        };
        ['RTCPeerConnection', 'webkitRTCPeerConnection', 'mozRTCPeerConnection'].forEach(function(obj) {
            // Override the constructors if they exist on the window object.
            if (window.hasOwnProperty(obj)) {
                window[obj] = newPeerConnection;
                // Copy the static methods.
                Object.keys(origPeerConnection).forEach(function(x) {
                    window[obj][x] = origPeerConnection[x];
                });
                window[obj].prototype = origPeerConnection.prototype;
            }
        });
    }
} + ')();';

var script = document.createElement('script');
script.textContent = inject;
(document.head || document.documentElement).appendChild(script);
script.parentNode.removeChild(script);
I tested this with a voice call in Google Hangouts and saw that two MediaStreams were added via pc.addStream and one track was added via pc.ontrack. addStream appears to carry the local media streams, while the event object passed to ontrack is an RTCTrackEvent with a streams property; I assume those streams are what you are looking for.
To access these streams from your extension's content script, you will need to create audio elements and set their "srcObject" property to the media stream, e.g.:
pc.ontrack = function(event) {
    // Check if our element already exists.
    var elm = document.getElementById("remoteStream");
    if (elm == null) {
        // Create an audio element.
        elm = document.createElement("audio");
        elm.id = "remoteStream";
    }
    // Set the srcObject to our stream (not sure if you need to clone it).
    elm.srcObject = event.streams[0].clone();
    // Write the element to the body.
    document.body.appendChild(elm);
    // Fire a custom event so our content script knows the stream is available.
    // You could pass the id in the "detail" object, for example:
    //   new CustomEvent("remoteStreamAdded", {"detail": {"id": "audio_element_id"}})
    // and then access it via e.detail.id in your event listener.
    var e = new CustomEvent("remoteStreamAdded");
    window.dispatchEvent(e);
}
Then in your content script you can listen for that event/access the mediastream like so:
window.addEventListener("remoteStreamAdded", function(e) {
elm = document.getElementById("remoteStream");
var stream = elm.captureStream();
})
With the captured stream available to your content script you can do pretty much anything you want with it. For example, MediaRecorder works really well for recording the stream(s), or you could use something like peer.js or maybe binary.js to stream it to another source.
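Here is a minimal sketch of the MediaRecorder route, assuming stream is the MediaStream captured in the listener above (the "audio/webm" mimeType is my assumption; check MediaRecorder.isTypeSupported for your browser):

// Record the captured stream into Blob chunks.
var chunks = [];
var recorder = new MediaRecorder(stream, { mimeType: "audio/webm" }); // assumed mimeType
recorder.ondataavailable = function(e) {
    if (e.data.size > 0) chunks.push(e.data);
};
recorder.onstop = function() {
    // Assemble the chunks into a single Blob you can download or upload.
    var blob = new Blob(chunks, { type: "audio/webm" });
    console.log("Recording ready:", URL.createObjectURL(blob));
};
recorder.start(1000); // emit a data chunk every second
// Later, when the call ends: recorder.stop();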
I haven't tested this, but it should also be possible to override the local streams. For example, in inject.js you could establish some blank MediaStream, override navigator.mediaDevices.getUserMedia, and instead of returning the local MediaStream return your own.
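An untested sketch of that idea (the canvas-based blank stream is purely illustrative):

// Wrap the original getUserMedia so calls resolve to our own stream.
var origGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);
navigator.mediaDevices.getUserMedia = function(constraints) {
    // Illustrative "blank" stream captured from an off-screen canvas;
    // frames only flow if you actually draw to the canvas.
    var canvas = document.createElement("canvas");
    canvas.getContext("2d"); // a context must exist before captureStream()
    return Promise.resolve(canvas.captureStream(30)); // 30 fps, video only
    // To fall back to the real devices instead:
    // return origGetUserMedia(constraints);
};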
This method should work in Firefox and maybe other browsers as well, assuming you use an extension/app to load the inject.js script at the start of the document. Having it loaded before any of the target page's libraries is key to making this work.
Capturing packets will only give you the network packets, which you would then need to turn into frames and put into a container. A server such as Janus can record videos.
Running headless Chrome and using the JavaScript MediaRecorder API is another option, but it is much heavier on resources.
Using the following code I get all zeroes in the audio stream from my microphone (using Chrome):
navigator.mediaDevices.getUserMedia({audio: true}).then(
    function(stream) {
        var audioContext = new AudioContext();
        var source = audioContext.createMediaStreamSource(stream);
        var node = audioContext.createScriptProcessor(8192, 1, 1);
        source.connect(node);
        node.connect(audioContext.destination);
        node.onaudioprocess = function(e) {
            console.log("Audio:", e.inputBuffer.getChannelData(0));
        };
    }).catch(function(error) { console.error(error); });
I created a jsfiddle here: https://jsfiddle.net/g3dck4dr/
What's wrong here?
Umm, something in your hardware config is wrong? The fiddle works fine for me (that is, it shows non-zero values). Do other web audio input tests work, like https://webaudiodemos.appspot.com/input/index.html?
Test to make sure you've selected the right input and that you don't have a hardware mute switch on.
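One quick way to check which inputs the browser actually sees is to list them with the standard enumerateDevices call (a sketch):

// List all audio inputs; labels are only revealed after the user
// has granted microphone permission.
navigator.mediaDevices.enumerateDevices().then(function(devices) {
    devices.filter(function(d) { return d.kind === "audioinput"; })
           .forEach(function(d) {
               console.log(d.label || "(label hidden)", d.deviceId);
           });
});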
I've learnt a lot in the last 48 hours about cross-domain policies, but apparently not enough.
Following on from this question. My HTML5 game supports Facebook login. I'm trying to download profile pictures of people's friends. In the HTML5 version of my game I get the following error in Chrome.
detailMessage: "com.google.gwt.core.client.JavaScriptException:
(SecurityError) ↵ stack: Error: Failed to execute 'texImage2D' on
'WebGLRenderingContext': Tainted canvases may not be loaded.
As I understand it, this error occurs because I'm trying to load an image from a different domain, but this can be worked around with an Access-Control-Allow-Origin header, as detailed in this question.
The URL I'm trying to download from is
https://graph.facebook.com/1387819034852828/picture?width=150&height=150
Looking at the network tab in Chrome I can see this has the required access-control-allow-origin header and responds with a 302 redirect to a new URL. That URL varies, I guess depending on load balancing, but here's an example URL.
https://fbcdn-profile-a.akamaihd.net/hprofile-ak-xap1/v/t1.0-1/c0.0.160.160/p160x160/11046398_1413754142259317_606640341449680402_n.jpg?oh=6738b578bc134ff207679c832ecd5fe5&oe=562F72A4&gda=1445979187_2b0bf0ad3272047d64c7bfc2dbc09a29
This URL also has the access-control-allow-origin header. So I don't understand why this is failing.
Being Facebook, and given that thousands of apps, games and websites display users' profile pictures, I'm assuming this is possible. I'm aware that I can bounce through my own server, but I'm not sure why I should have to.
Answer
I eventually got cross-domain image loading working in libgdx with the following code (which is pretty hacky and I'm sure can be improved). I've not managed to get it working with the AssetDownloader yet; I'll hopefully work that out eventually.
public void downloadPixmap(final String url, final DownloadPixmapResponse response) {
    final RootPanel root = RootPanel.get("embed-html");
    final Image img = new Image(url);
    img.getElement().setAttribute("crossOrigin", "anonymous");
    img.addLoadHandler(new LoadHandler() {
        @Override
        public void onLoad(LoadEvent event) {
            HtmlLauncher.application.getPreloader().images.put(url, ImageElement.as(img.getElement()));
            response.downloadComplete(new Pixmap(Gdx.files.internal(url)));
            root.remove(img);
        }
    });
    root.add(img);
}
interface DownloadPixmapResponse {
    void downloadComplete(Pixmap pixmap);
    void downloadFailed(Throwable e);
}
Are you setting the crossOrigin attribute on your img before requesting it?
var img = new Image();
img.crossOrigin = "anonymous";
img.src = "https://graph.facebook.com/1387819034852828/picture?width=150&height=150";
It was working for me when this question was asked. Unfortunately the URL above no longer points to anything, so I've changed it in the example below.
var img = new Image();
img.crossOrigin = "anonymous"; // COMMENT OUT TO SEE IT FAIL
img.onload = uploadTex;
img.src = "https://i.imgur.com/ZKMnXce.png";

function uploadTex() {
    var gl = document.createElement("canvas").getContext("webgl");
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    try {
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
        log("DONE: ", gl.getError());
    } catch (e) {
        log("FAILED to use image because of security:", e);
    }
}

function log() {
    var div = document.createElement("div");
    div.innerHTML = Array.prototype.join.call(arguments, " ");
    document.body.appendChild(div);
}
<body></body>
How to check you're receiving the headers
Open your devtools, pick the network tab, reload the page, select the image in question, look at both the REQUEST headers and the RESPONSE headers.
The request should show your browser sent an Origin: header
The response should show you received
Access-Control-Allow-Methods: GET, OPTIONS, ...
Access-Control-Allow-Origin: *
Note, both the response AND THE REQUEST must show the entries above. If the request is missing Origin: then you didn't set img.crossOrigin, and the browser will not let you use the image even if the response said it was ok.
If your request has the Origin: header and the response does not have the other headers, then that server did not give permission to use the image. In other words, it will work in an image tag and you can draw it to a canvas, but you can't use it in WebGL, and any 2D canvas you draw it into will become tainted, so toDataURL and getImageData will stop working.
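As a quick programmatic sanity check (a sketch using the imgur URL from the snippet above; if the fetch resolves, the server sent a usable Access-Control-Allow-Origin, and if it rejects, the CORS headers are missing):

// mode: "cors" makes the request fail outright when the server does
// not send an acceptable Access-Control-Allow-Origin header.
fetch("https://i.imgur.com/ZKMnXce.png", { mode: "cors" })
    .then(function(res) { console.log("CORS OK, status", res.status); })
    .catch(function(e) { console.log("CORS request failed:", e); });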
This is a classic cross-domain issue that happens when you're developing locally.
I use Python's SimpleHTTPServer as a quick fix for this.
Navigate to your directory in the terminal, then type:
$ python -m SimpleHTTPServer
and you'll get
Serving HTTP on 0.0.0.0 port 8000 ...
Then go to http://0.0.0.0:8000/ and you should see the problem resolved.
You can base64 encode your texture.
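For example (a sketch; the data URI below is a truncated placeholder, so substitute your own encoded image). Because data URIs are same-origin, no CORS headers are involved:

var img = new Image();
img.onload = function() {
    var gl = document.createElement("canvas").getContext("webgl");
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // No crossOrigin attribute needed: a data URI never taints the canvas.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
};
img.src = "data:image/png;base64,iVBORw0KGgo..."; // truncated placeholder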
As discussed in a previous question, I have built a prototype (using MVC Web API, NAudio and NAudio.Lame) that is streaming live low-quality audio after converting it to MP3. The source stream is PCM: 8K, 16-bit, mono, and I'm making use of HTML5's audio tag.
On both Chrome and IE11 there is a 15-34 second delay (high-latency) before audio is heard from the browser which, I'm told, is unacceptable for our end users. Ideally the latency would be no more than 5 seconds. The delay occurs even when using the preload="none" attribute within my audio tag.
Looking more closely at the issue, it appears as though both browsers will not start playing audio until they have received ~32K of audio data. With that in mind, I can affect the delay by changing Lame's MP3 'bitrate' setting. However, if I reduce the delay (by sending more data to the browser for the same length of audio), I will introduce audio drop-outs later.
Examples:
If I use Lame's V0 encoding the delay is nearly 34 seconds which requires almost 0.5 MB of source audio.
If I use Lame's ABR_32 encoding, I can reduce the delay to 10-15 seconds but I will experience pauses and drop-outs throughout the listening session.
Questions:
Any ideas how I can minimize the start-up delay (latency)?
Should I continue investigating various Lame 'presets' in hopes of picking the "right" one?
Could it be that MP3 is not the best format for live streaming?
Would switching to Ogg/Vorbis (or Ogg/OPUS) help?
Do we need to abandon HTML5's audio tag and use Flash or a java applet?
Thanks.
You cannot reduce the delay, since you have no control over the browser's code or its buffering size. The HTML5 specification does not enforce any constraint, so I don't see any reason why it would improve.
You can however implement a solution with the Web Audio API (it's quite simple), where you handle the streaming yourself.
If you can split your MP3 into fixed-size chunks (so that each chunk's size is known beforehand, or at least at receive time), then you can have live streaming in 20 lines of code. The chunk size will be your latency.
The key is to use AudioContext::decodeAudioData.
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

var offset = 0;
var byteOffset = 0;
var minDecodeSize = 16384; // This is your chunk size

var request = new XMLHttpRequest();
request.onprogress = function(evt) {
    if (request.response) {
        var size = request.response.length - byteOffset;
        if (size < minDecodeSize) return;
        // In Chrome, XHR stream mode gives text, not an ArrayBuffer;
        // in Firefox you get an ArrayBuffer as is.
        var ab;
        if (request.response instanceof ArrayBuffer) {
            ab = request.response;
        } else {
            ab = new ArrayBuffer(size);
            var buf = new Uint8Array(ab);
            for (var i = 0; i < size; i++)
                buf[i] = request.response.charCodeAt(i + byteOffset) & 0xff;
        }
        byteOffset = request.response.length;
        context.decodeAudioData(ab, function(buffer) {
            playSound(buffer);
        }, onError);
    }
};
request.open('GET', url, true);
request.responseType = expectedType; // 'stream' in chrome, 'moz-chunked-arraybuffer' in firefox, 'ms-stream' in IE
request.overrideMimeType('text/plain; charset=x-user-defined');
request.send(null);
function playSound(buffer) {
    var source = context.createBufferSource(); // creates a sound source
    source.buffer = buffer;                    // tell the source which sound to play
    source.connect(context.destination);       // connect the source to the speakers
    source.start(offset);                      // schedule playback at the accumulated offset
    // note: on older systems you may have to use the deprecated noteOn(time)
    offset += buffer.duration;
}
I want to write a basic script with the HTML5 Web Audio API that can play some audio files. But I don't know how to unload a playing audio file and load another one. In my script, two audio files play at the same time, which is not what I wanted.
Here is my code:
var context,
    soundSource,
    soundBuffer;

// Step 1 - Initialise the Audio Context
context = new webkitAudioContext();

// Step 2: Load our Sound using XHR
function playSound(url) {
    // Note: this loads asynchronously
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";
    // Our asynchronous callback
    request.onload = function() {
        var audioData = request.response;
        audioGraph(audioData);
    };
    request.send();
}

// This is the code we are interested in
function audioGraph(audioData) {
    // create a sound source
    soundSource = context.createBufferSource();
    // The Audio Context handles creating source buffers from raw binary
    soundBuffer = context.createBuffer(audioData, true /* make mono */);
    // Add the buffered data to our object
    soundSource.buffer = soundBuffer;
    // Plug the cable from one thing to the other
    soundSource.connect(context.destination);
    // Finally
    soundSource.noteOn(context.currentTime);
}

// Stop all of the sounds
function stopSounds() {
    // How can I do this?
}
// Events for audio buttons
document.querySelector('.pre').addEventListener('click', function() {
    stopSounds();
    playSound('http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3');
});

document.querySelector('.next').addEventListener('click', function() {
    stopSounds();
    playSound('http://thelab.thingsinjars.com/web-audio-tutorial/nokia.mp3');
});
You should be pre-loading sounds into buffers once, at launch, and simply resetting the AudioBufferSourceNode whenever you want to play one back.
To play multiple sounds in sequence, you need to schedule them using noteOn(time), one after the other, based on the buffers' respective lengths.
To stop sounds, use noteOff.
It sounds like you are missing some fundamental Web Audio concepts. This (and more) is described in detail and shown with samples in this HTML5Rocks tutorial and the FAQ.
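A minimal sketch of stopSounds() along those lines, reusing the globals from the question (note that an AudioBufferSourceNode can only be played once, so playSound must create a fresh one each time, as the question's code already does):

// Stop whatever is currently playing.
function stopSounds() {
    if (soundSource) {
        soundSource.noteOff(0); // in current browsers: soundSource.stop(0)
        soundSource = null;     // the node can't be reused after stopping
    }
}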