Firefox 25 and AudioContext createJavaScriptNode not a function - html

Firefox 25 is supposed to bring Web Audio support, but an important function seems to be missing: createJavaScriptNode.
I'm trying to build an analyser, but I get an error in the console that createJavaScriptNode is not a function.
Demo - http://jsbin.com/olugOri/3/edit

You can try using createScriptProcessor instead. Firefox is still not getting correct values, but at least that error is no longer present.
Demo - http://jsbin.com/olugOri/4/edit
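For illustration, a minimal sketch of the rename, feature-detecting both names (this assumes a bare AudioContext; the demo's analyser wiring is omitted):

    // createJavaScriptNode was renamed to createScriptProcessor in the spec,
    // so feature-detect both names before building the analyser.
    var context = new (window.AudioContext || window.webkitAudioContext)();
    var factory = context.createScriptProcessor || context.createJavaScriptNode;
    var processor = factory.call(context, 2048, 1, 1); // buffer size, in/out channels

    processor.onaudioprocess = function (e) {
      var samples = e.inputBuffer.getChannelData(0);
      // ...analyse the samples here...
    };
    processor.connect(context.destination);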
Edit: (more visibility for the important discussion in the comments)
Firefox does support MediaElementSource if the media adheres to the Same-Origin Policy, however there is no error produced by Firefox when attempting to use media from a remote origin.
The specification is not really specific about it (pun intended), but I've been told that this is intended behavior, and the issue is actually with Chrome: it's the Blink implementations (Chrome, Opera) that need to be updated to require CORS.
MediaElementSource Node and Cross-Origin Media Resources:
From: Robert O'Callahan <robert@ocallahan.org>
Date: Tue, 23 Jul 2013 16:30:00 +1200
To: "public-audio@w3.org" <public-audio@w3.org>
HTML media elements can play media resources from any origin. When an
element plays a media resource from an origin different from the page's
origin, we must prevent page script from being able to read the contents of
the media (e.g. extract video frames or audio samples). In particular we
should prevent ScriptProcessorNodes from getting access to the media's
audio samples. We should also prevent information about samples leaking in other
ways (e.g. timing channel attacks). Currently the Web Audio spec says
nothing about this.
I think we should solve this by preventing any non-same-origin data from
entering Web Audio. That will minimize the attack surface and the impact on
Web Audio.
My proposal is to make MediaElementAudioSourceNode convert data coming from
a non-same origin stream to silence.
If this proposal makes it into the spec, it will be nearly impossible for a developer to even realize why his MediaElementSource is not working. As it stands right now, calling createMediaElementSource() on an <audio> element in Firefox 26 actually stops the <audio> controls from working at all and throws no errors.
What dangerous things can you do with the audio/video data from a remote origin? The general idea is that without applying the Same-Origin Policy to a MediaElementSource node, some malicious javascript could access media that only the user should have access to (session, vpn, local server, network drives) and send its contents—or some representation of it—to an attacker.
The HTML5 media elements don't have these restrictions by default. You can include remote media across all browsers by using the <audio>, <img>, or <video> elements. It's only when you want to manipulate or extract the data from these remote resources that the Same-Origin Policy comes into play.
[It's] for the same reason that you cannot dump image data cross-origin via <canvas>: media may contain sensitive information and therefore allowing rogue sites to dump and re-route content is a security issue. - @nmaier
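For context, the CORS opt-in these proposals assume already exists on media elements. A minimal sketch, with a hypothetical URL and assuming the server sends Access-Control-Allow-Origin for your page:

    // Request the remote media with CORS so Web Audio may keep it readable.
    var audio = new Audio();
    audio.crossOrigin = 'anonymous';                  // ask for a CORS fetch
    audio.src = 'https://media.example.com/song.ogg'; // hypothetical remote file

    var context = new (window.AudioContext || window.webkitAudioContext)();
    var source = context.createMediaElementSource(audio);
    source.connect(context.destination);
    audio.play();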

Related

How do I stream a video of initially unknown size to the HTML5 video tag?

I am attempting to write a web service (using C# and WebAPI, though the actual server technology likely isn't important) that hosts dynamic video files for consumption by a basic HTML5 video element.
Here is my situation and my requirements:
The video is being transcoded on the fly and so the final file size is not known. I cannot provide a Content-Length. However, the time duration of the video is known in advance.
The video must be immediately playable when the page loads, even if it's not done transcoding.
The user must be able, via the HTML5 video controls, to seek backwards and forwards in the video. If they attempt to seek forward to a point that is not transcoded, the request will block until the transcoding catches up.
I tried to use a 206 Partial Content response with the Content-Range header, but since I am unable to provide a Content-Length, the player in Chrome seems unable to seek beyond the first chunk of video it gets, and the player in Firefox doesn't even attempt to download more than the first chunk. It is also invalid to respond with a range outside what the client asked for, and the video player always asks for bytes 0+.
Without a content-length, I considered using Transfer-Encoding: chunked and chunking the output. However, Chrome does not let you seek through a video if the server does not support ranged requests.
I have also considered specifying a fake content length of 1TB or something ridiculous just so I can do the ranged response option but I do not know if that will affect the progress bar or seek capabilities. Does the HTML5 video player determine the progress bar's dimensions based on file size or duration?
So what are my options? I am sure this is a problem that's been solved before. Can ranged responses be used in conjunction with chunked encoding?
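For what it's worth, RFC 7233 does allow a 206 response whose complete length is unknown, written as * in Content-Range; whether a given player accepts that is exactly the compatibility question raised above. A minimal Node.js sketch (getTranscodedChunk is hypothetical plumbing):

    var http = require('http');

    http.createServer(function (req, res) {
      // Parse "Range: bytes=start-end"; players often ask for "bytes=0-".
      var m = /bytes=(\d+)-(\d*)/.exec(req.headers.range || 'bytes=0-');
      var start = parseInt(m[1], 10);
      var end = m[2] ? parseInt(m[2], 10) : start + 1024 * 1024 - 1;

      res.writeHead(206, {
        'Content-Type': 'video/mp4',
        'Accept-Ranges': 'bytes',
        'Content-Range': 'bytes ' + start + '-' + end + '/*', // '*' = total size unknown
        'Content-Length': end - start + 1
      });

      // Hypothetical: blocks until the requested bytes have been transcoded.
      getTranscodedChunk(start, end, function (buffer) {
        res.end(buffer);
      });
    }).listen(8080);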

Do web-based radio and audio streaming services use the Web Audio API for playback?

I'm trying to figure out if web-based audio streaming sites use the Web Audio API for playback, or if they rely on the audio element or something else.
Since the user of an audio streaming service typically doesn't need more functionality than starting and stopping the audio, I guess the audio element is enough. If a VU-meter is required, then I would guess the Web Audio API would be used, since it has a built-in analyser node. But since IE doesn't support the API, I suppose you'd rather use the audio element and reach the IE users than use fancy extras such as a VU-meter.
I've been looking at the source code for Spotify's web player, Grooveshark, BBC radio and Polish public radio, but I find neither audio elements nor use of the Web Audio API. I did find that Swedish public radio (sr.se) makes use of the audio element, though.
I'm not asking for anyone to go through the JavaScript source code for me, but rather if someone who is familiar with the subject could point me in the right direction.
I don't know of any internet radio services playing back their streams with the Web Audio API currently, but I wouldn't be surprised to find one. I've been working on one myself using Audiocogs' excellent Aurora.js library, which enables codecs in-browser that wouldn't normally be available, by decoding the audio with JavaScript. However, for compatibility reasons, as you have pointed out, this would be considered a bit experimental today.
Most internet radio stations use progressive HTTP streaming (SHOUTcast/Icecast style), which can be played back within an <audio> element or Flash. This works well but can be hard to get right, especially if you use SHOUTcast servers, as they are not quite 100% compatible with HTTP, hurting browser support in some versions of Firefox and a lot of mobile browsers. I ended up writing my own server called AudioPump Server to get better browser and mobile browser support with progressive HTTP.
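To illustrate the combination the question asks about, here is a sketch that plays a progressive stream in an <audio> element and hangs an AnalyserNode off it for a VU-meter (the stream URL is hypothetical, and a cross-origin stream needs CORS, as discussed in the first answer):

    var audio = new Audio('http://stream.example.com:8000/live'); // hypothetical
    var context = new (window.AudioContext || window.webkitAudioContext)();
    var analyser = context.createAnalyser();

    context.createMediaElementSource(audio).connect(analyser);
    analyser.connect(context.destination);
    audio.play();

    var levels = new Uint8Array(analyser.frequencyBinCount);
    (function draw() {
      analyser.getByteFrequencyData(levels); // feed these values to the meter
      requestAnimationFrame(draw);
    })();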
Depending on your Flash code and ActionScript version available, you might also have to deal with memory leaks in creative ways, since by default Flash will keep all of your stream data in memory indefinitely as it was never built to stream over HTTP. Many use RTMP with Flash (with Wowza or similar on the server), which Flash was built to stream with to get around this problem.
iOS supports HLS, which is basically a collection of static files served by an HTTP server. The encoder writes a chunk of the stream to each file as the encoding occurs, and the client just downloads them and plays them back seamlessly. The benefit here is that the client can choose a bitrate to stream, raising quality up and down as network conditions change. This also means that you can completely switch networks (say from WiFi to 3G) and still maintain the stream, since chunks are downloaded independently and statelessly. Android "supports" HLS, but it is buggy. Safari is the only browser currently supporting HLS.
Compatibility detection is not something you need to solve yourself. There are many players, such as jPlayer and JW Player which wrangle HTML5 audio support detection, codec support detection, and provide a common API between playback for HTML5 audio and Flash. They also provide an optional UI if you want to get up and running quickly.
Finally, most stations do offer a link to allow you to play the stream in your own media player. This is done by linking to a playlist file (usually M3U or PLS) which is downloaded and often immediately opened (as configured by the user and their browser). The player software loads this playlist and then connects directly to the streaming server to begin playback. On Android, you simply link to the stream URL. It will detect the Content-Type response header, disconnect, and open its configured media player for playback. These days you have to hunt to find these direct links, but they are out there.
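Such a playlist file can be as small as this (the stream URL is hypothetical):

    # station.m3u -- one stream URL per line
    http://stream.example.com:8000/live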
If you ever want to know what a station is using without digging around in their compiled and minified source code, simply use a tool like Fiddler or Wireshark and watch the traffic. You will find that it is very straightforward under the hood.
We use Web Audio for streaming via Aurora.js using a protocol very similar to HTTP Live Streaming. We did this because we wanted the same streaming backend to serve iPhone, Android and the web.
It was all a very long and painful process that took over 6 months of effort, but now that it's all finished, it's all good.
Have a look at http://radioflote.com and feel free to shoot questions or clarifications regarding anything. Go ahead and disassemble the code if you want to. Not a problem.

How to do cross-browser/device audio capture

I would like to clarify certain things I have found, and raise questions about things I don't know:
Capturing the cam/mic through the browser can be done with getUserMedia().
Is there anything for iOS devices? getUserMedia() doesn't seem to work on them.
How can I capture the actual audio from a web browser application (e.g. if I play an audio file and seek forward 2 minutes, I would like to capture the actual audio stream from the HTML5 player so that I hold the actual audio data)?
If you are not going to support mobile devices, you need to use Flash. One good solution is wami-recorder.
From their website:
The Problem
As of this writing, most browsers still do not support WebRTC's getUserMedia(), which promises to give web developers microphone access via Javascript. This project achieves the next best thing for browsers that support Flash. Using the WAMI recorder, you can collect audio on your server without installing any proprietary media server software.
The Solution
The WAMI recorder uses a light-weight Flash app to ship audio from client to server via a standard HTTP POST. Apart from the security settings to allow microphone access, the entire interface can be constructed in HTML and Javascript.
Hope this helps.
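For browsers that do expose getUserMedia() (prefixed at the time of writing), the capture the quote alludes to looks roughly like this sketch:

    navigator.getUserMedia = navigator.getUserMedia ||
                             navigator.webkitGetUserMedia ||
                             navigator.mozGetUserMedia;

    if (navigator.getUserMedia) {
      navigator.getUserMedia({ audio: true }, function (stream) {
        // e.g. feed the microphone into Web Audio for recording/analysis
        var context = new (window.AudioContext || window.webkitAudioContext)();
        var source = context.createMediaStreamSource(stream);
      }, function (err) {
        console.log('Microphone access denied or unavailable', err);
      });
    }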

Pure HTML5 videoconference

I want to make a single web application avoiding any Flash code. This application must include videoconferencing, and I want to implement it in pure HTML5. Is it possible? I know about websockets, but I don't know whether videoconferencing can be implemented through them with reasonable performance (at least 24fps + sound at a decent resolution, minimum 640x480), with both endpoints being web apps (both endpoints should use a browser).
Thanks in advance
For anyone following up on this question: on Feb 4th, 2013, a solution arrived with WebRTC in Chrome and Firefox. For examples see https://hacks.mozilla.org/2013/02/hello-chrome-its-firefox-calling/ or http://www.html5rocks.com/en/tutorials/webrtc/basics/ or https://code.google.com/p/sipservlets/wiki/HTML5WebRTCVideoApplication
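The WebRTC building blocks look roughly like the following sketch (prefixed in browsers of that era; the STUN server is hypothetical, and the signalling channel, e.g. WebSockets, is omitted):

    var PeerConnection = window.RTCPeerConnection ||
                         window.webkitRTCPeerConnection ||
                         window.mozRTCPeerConnection;
    navigator.getUserMedia = navigator.getUserMedia ||
                             navigator.webkitGetUserMedia ||
                             navigator.mozGetUserMedia;

    navigator.getUserMedia({ video: true, audio: true }, function (stream) {
      var pc = new PeerConnection({ iceServers: [{ url: 'stun:stun.example.org' }] });
      pc.addStream(stream); // delivered to the remote peer's onaddstream
      // createOffer()/setLocalDescription() etc. follow; see the tutorials above.
    }, function (err) {
      console.log('getUserMedia failed', err);
    });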
You can't really use HTML5 video for live streaming at the moment, and it doesn't have support for web cams yet.
Ericsson has modified a WebKit browser and is showing how this can be done with the hopefully upcoming HTML5 Stream API. See Beyond HTML5 - Implementing <device> and stream management in WebKit
It is impossible to capture web-cam images/microphone feed just through JavaScript (although there are plug-ins, such as a jQuery webcam plug-in, which hand the capture off to Flash), so it would be necessary for you to have some kind of application/plug-in installed.
The speed part is only for the client to worry about: web sockets will be as fast as the connection permits.
You should do some research on web workers, since they would be very useful for speeding up your application (you could have the microphone interface, web-cam interface and UI each with its own worker, thus never blocking the application or rendering it unresponsive).
EDIT: the aforementioned jQuery plug-in works through the use of <canvas>.
As Jonas said, given the state of things now, we can't build videoconferencing with HTML5. There are many limitations with browsers, too: there is no common video codec supported by all browsers, and live streaming is only properly supported by Safari (using the HTML5 video tag). In my experience, live streaming doesn't work properly in any browser on Windows.
But if you want to get some information about live streaming, see https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
You can use this source to test your live-streaming examples:
http://xfunoonx.api.channel.livestream.com/3.0/playlist.m3u8
This content will work only with Safari on Mac.

Does HTML5 change anything if I don't use video or audio?

I keep hearing all about HTML5 and how great it is, but if I don't really care about audio and video, is there anything it really changes for me? I've read up on the new tags it supports and they just don't seem to be all that revolutionary beyond its video and audio capabilities.
Shamelessly copy-pasted from Wikipedia:
The canvas element for immediate mode 2D drawing.
Timed media playback (possibly not interesting for you)
Offline storage database (offline web applications). See Web Storage.
Document editing (via DOM API and user interface)
Drag-and-drop
Cross-document messaging
Browser history management
MIME type and protocol handler registration.
Microdata
Browser-based SQL databases
Oh, and WebSockets to replace AJAX and Comet.
http://en.wikipedia.org/wiki/HTML5#New_APIs
DOM storage
Canvas
Drag'n'drop
Semantic microformats support
One of the biggest deals about HTML5, which is under-reported, is that HTML4 never defined error handling, but this is well defined in HTML5. All browser vendors are building HTML5 parsers that conform to this spec. While this is not sexy, the end result will be that browsers become more interoperable with each other (especially in cases where the author makes an error). In the long run this should mean you'll spend less time trying to get all browsers to work correctly, and users will benefit from fewer broken sites in their browser of choice.
HTML5 also allows you to make more application-quality sites, using many of the technologies mentioned in the other answers. Opera Dragonfly (the project I'm involved with) is a complex web app which doesn't use audio or video but takes advantage of a large number of HTML5 technologies. We use AppCache to make sure it still works when you are offline, Web Storage to save user preferences and history (we can store a lot more information than cookies allowed), and will likely use Web Workers to allow the app to use more than one process at once (which will speed up performance on multi-core machines).
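As a taste of how little ceremony Web Storage needs, a minimal sketch (the key names are made up):

    // Persist far more than a cookie's ~4 KB, and it never hits the wire.
    localStorage.setItem('preferences', JSON.stringify({ theme: 'dark' }));
    var prefs = JSON.parse(localStorage.getItem('preferences'));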
If you are doing anything with graphics then the Canvas API gives you a lot of drawing options, while SVG (an open vector format) can be used within your HTML pages now. Previously pages had to be served as XML for SVG to be included inside them.
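And a minimal sketch of the immediate-mode drawing the Canvas API offers (assumes a <canvas> element with id "c"):

    var ctx = document.getElementById('c').getContext('2d');
    ctx.fillStyle = '#336699';
    ctx.fillRect(10, 10, 100, 50);        // a filled rectangle
    ctx.beginPath();
    ctx.arc(160, 35, 25, 0, Math.PI * 2); // and a stroked circle
    ctx.stroke();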