Though I only play audio in response to clicks, I initialize the AudioContext and buffers and such when the script loads.
In mobile Chrome 57.0.2987.132 the console shows the following warning when loaded from an iframe:
An AudioContext in a cross origin iframe must be created or resumed
from a user gesture to enable audio output.
For audio to work, I recreate the AudioContext on the first click. Is there a way to simply activate the existing AudioContext on the first click? Also, can I detect whether audio is currently blocked?
References:
Chromium issues
Chromium mailing list
The AudioContext.state will tell you if it's "running" or "suspended". If it's "suspended", call AudioContext.resume() from inside a user gesture, and it should start it up for you (without having to recreate state).
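For illustration, a minimal sketch of that check; the variable audioCtx and the "#play-button" selector are names I've assumed, not from the question:
// Sketch: resume an existing (suspended) AudioContext from a user gesture instead of recreating it.
// "audioCtx" and "#play-button" are assumed names for illustration.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

document.querySelector('#play-button').addEventListener('click', () => {
  // audioCtx.state also answers "is audio currently blocked?": it will be 'suspended'.
  if (audioCtx.state === 'suspended') {
    audioCtx.resume().then(() => console.log('AudioContext resumed:', audioCtx.state));
  }
  // ...then start your buffer source nodes as usual...
});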
Related
I've managed to cast a LAN hosted dashboard page to the chromecast, using https://boombatower.github.io/chromecast-dashboard/sender/.
However, the HTML5 video tag will not play without the muted attribute (and trying to unmute causes it to pause). Here is the error: "Unmuting failed and the element was paused instead because the user didn't interact with the document before". It has to do with Chrome policy, of course.
Is there any way to cause interaction with the Chromecast to perhaps allow unmuting? I've tried to press the pause button on my TV to send an event via CEC (it works on the media controller), but the videos still get muted.
I figured it out: I need to attach the video element to the Chromecast receiver API, so I had to create my own Cast app.
There isn't a way to do this with the given dashboard app. It wraps the user's page inside an iframe, and there is just no way to access the chromecast api from inside.
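For illustration only, a rough sketch of what a minimal custom receiver page can look like with the legacy Cast Receiver v2 API; the element id and overall structure are my assumptions, not the poster's actual app:
// Sketch of a minimal custom Cast receiver (legacy v2 API), assuming a
// <video id="player"> element in the receiver page. Attaching the element to a
// MediaManager is what lets the receiver control (and unmute) playback.
var videoElement = document.getElementById('player');
var castManager = cast.receiver.CastReceiverManager.getInstance();
var mediaManager = new cast.receiver.MediaManager(videoElement);
castManager.start();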
I want to find out where in a webpage's source code a sound effect is played. That would let me better understand the code and also obtain the audio file. I searched the "Sources" and "Network" tabs of the Chrome Inspector, but there are no audio files there. The sound is probably fetched by an AJAX request or generated with the HTML5 Web Audio API. How do I set a breakpoint in the Chrome debugger to pause when a sound plays?
As suggested in Abarnett's comment:
Use a browser add-on/extension/plug-in such as Chrome Audio Capture to record internal sounds in the browser.
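If you specifically want the debugger to pause when a sound starts (this is not from the original answer, just a sketch of one approach), you can temporarily wrap the playback entry points from the DevTools console before triggering the sound:
// Pause in the debugger whenever an <audio>/<video> element starts playing.
var nativePlay = HTMLMediaElement.prototype.play;
HTMLMediaElement.prototype.play = function () {
  debugger; // inspect the call stack to find the code that triggered playback
  return nativePlay.apply(this, arguments);
};

// For Web Audio API sounds, break when a buffer source node is started instead.
var nativeStart = AudioBufferSourceNode.prototype.start;
AudioBufferSourceNode.prototype.start = function () {
  debugger;
  return nativeStart.apply(this, arguments);
};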
In HTML5, how do I check whether the user has viewed the entire video, i.e., without skipping through? It is straightforward in desktop browsers, but iOS allows playback controls even when you do not explicitly enable them, so simply listening for the 'ended' event is not accurate.
Try this on iOS; I can only test in Chrome and Firefox:
Use the seeked event:
vid.addEventListener("seeked", function () {
    // Fires whenever the user jumps to a new position, e.g. via the scrubber.
}, false);
Additionally, if you only want to detect the user seeking forward, keep track of the playback position by listening to the timeupdate event and reading the currentTime property.
See this JSFiddle
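For illustration, a sketch of that idea, assuming the same vid element as above; the variable names and the one-second tolerance are my additions:
// Sketch: detect forward seeking by remembering the furthest position actually watched.
let watchedTo = 0;     // furthest playback position reached through normal playback
let skipped = false;

vid.addEventListener("timeupdate", function () {
  // Only advance the watermark when playback moves forward in small, natural steps.
  if (vid.currentTime - watchedTo < 1) {
    watchedTo = Math.max(watchedTo, vid.currentTime);
  }
});

vid.addEventListener("seeked", function () {
  // Seeking past what was actually watched counts as skipping ahead.
  if (vid.currentTime > watchedTo + 1) {
    skipped = true;
  }
});

vid.addEventListener("ended", function () {
  if (!skipped) {
    console.log("User watched the entire video without skipping forward.");
  }
});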
Before I get flamed to death, I know this doesn't work currently due to Apple's concern over downloading an audio file automatically.
However, my question is: Has anyone found a cunning workaround yet? I just want to play a start up sound on the launch of a game and currently have to wait for the user to click somewhere before I can play the audio. One of you clever chaps must have got this working by now?
There is no chance of getting autoplay working in mobile browsers. Android and iOS don't allow it, and personally I think that is a reasonable restriction! Imagine if every second website you opened played an ugly sound on load!
But you can use a little hack so that the user will not notice that they actually started the audio for your application:
You WILL need a user interaction to start your audio. So your app or game probably has a start screen or a welcome button that needs a click to get to the main menu or start the game. Bind to a user event (the accepted events are: "click", "touchend", "doubleclick" and "keydown") and call the load() method on your <audio>.
Bind to the "canplaythrough" event of the <audio>. This event is triggered when your source is ready to play. Here you can now call play(), pause() or wait for other interactions. So the audio is ready, but now you have full control over when to start or stop the sound.
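A minimal sketch of that pattern; the element ids and file name are placeholders of mine:
// Sketch of the "unlock on first gesture" pattern described above.
var audio = document.getElementById('game-audio');        // <audio id="game-audio" src="startup.mp3">
var startButton = document.getElementById('start-button');

startButton.addEventListener('click', function () {
  // 1. Start loading inside the user gesture; this "unlocks" the element on iOS/Android.
  audio.load();
});

audio.addEventListener('canplaythrough', function () {
  // 2. The source is ready: from here on you can call play()/pause() whenever you like.
  audio.play();
});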
I also advise you to use audio sprites on mobile clients. iOS (and Android?) internally implements audio support through a singleton. That means that you cannot, as in a desktop browser, have 10 audio elements and play different sounds at once. You can only play one file!
So changing the source for different sounds takes too much time. With an audio sprite you can start your sprite file when the user first interacts with your website or game, then pause it. When you need to play a sound, set currentTime to the beginning of that sprite and pause again when the currentTime of your file reaches the end of the sprite. You can check the currentTime of your sprite in the timeupdate event.
If you are interested, I can prepare my JavaScript audio sprite player for you!
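A rough sketch of the sprite idea; the sprite map, element id and timings are made-up placeholders, not the poster's player:
// Sketch of a tiny audio-sprite player as described above.
var sprite = document.getElementById('sprite-audio');   // one file containing all sounds
var sprites = {
  laser:     { start: 0.0, end: 0.7 },
  explosion: { start: 1.0, end: 2.4 }
};
var current = null;

function playSprite(name) {
  current = sprites[name];
  sprite.currentTime = current.start;   // jump to the beginning of the requested sprite
  sprite.play();
}

sprite.addEventListener('timeupdate', function () {
  // Pause again once the current sprite has finished.
  if (current && sprite.currentTime >= current.end) {
    sprite.pause();
    current = null;
  }
});

// Unlock on the first user interaction, then pause immediately.
document.addEventListener('touchend', function unlock() {
  sprite.play();
  sprite.pause();
  document.removeEventListener('touchend', unlock);
});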
The only solution I have seen so far is to create a shell app and put the web app inside a UIWebView.
http://flowz.com/2011/03/apple-disabled-audiovideo-playback-in-html5/
// Set on the UIWebView instance (not the class):
webView.allowsInlineMediaPlayback = YES;
webView.mediaPlaybackRequiresUserAction = NO;
I also would really like to know how to do this in a plain-old web app.
I believe I just solved this for my own app.
The steps:
The controller loads up.
Then, in viewDidLoad, have your web view load the HTML: [self.webView loadHTMLString:htmlFile baseURL:[self getBasePath]];
Then set mediaPlaybackRequiresUserAction = NO on the web view.
If you set it too soon, I think the load of the HTML resets it.
I have a chat-bot app with voice messages, and I needed them to autoplay whenever required, so here is what worked for me in my Angular app:
Just attach a click event listener to the document and call player.load(); after that, whenever you set player.src = x; it will autoplay.
<audio id="voicePlayer" controls autoplay playsinline #voicePlayer></audio>
@ViewChild('voicePlayer', { read: ViewContainerRef }) voicePlayerRef: ViewContainerRef;
...
ngAfterContentInit(): void {
  this.voicePlayer = this.voicePlayerRef.element.nativeElement;
  // Keep a reference to the bound handler so removeEventListener can actually remove it.
  this._onDocumentClickBound = this._onDocumentClick.bind(this);
  document.addEventListener('click', this._onDocumentClickBound);
}
_onDocumentClick() {
  // Called from a real user gesture: load() here unlocks the element for later autoplay.
  this.voicePlayer.load();
  document.removeEventListener('click', this._onDocumentClickBound);
}
...
this.voicePlayer.src = 'x';
HowlerJS provides a workaround: the user doesn't need to allow autoplay for the audio to be played automatically.
Explanation from the docs of how this workaround is achieved:
howler.js is an audio library for the modern web. It defaults to Web Audio API and falls back to HTML5 Audio. This makes working with audio in JavaScript easy and reliable across all platforms.
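For reference, a minimal sketch of using howler.js; the file name is a placeholder of mine, and my understanding is that Howler unlocks audio on the first user gesture and retries blocked playback then:
// Minimal howler.js sketch; 'notification.mp3' is a placeholder file name.
var sound = new Howl({
  src: ['notification.mp3'],
  autoplay: true  // if blocked, Howler should retry once its auto-unlock fires on the first user gesture
});

// Later, play on demand (e.g. when a new voice message arrives).
sound.play();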
I am porting a vehicle in-dash display unit app over to the browser. The big goal is to get it running completely within Mobile Safari.
It's an HTML5/JS music app that relies heavily on jQuery.load to move around between different "screens" by loading in page fragments.
The issue is that when a user selects a track to play, they are taken to the "now playing" screen, which should start playback of the track. However, the track does not play once this screen is reached, and the user instead needs to explicitly click play from this screen for audio playback to start. Once this has happened for the first time, autoplay works for the duration of the app.
I understand that Mobile Safari intentionally put blockers in place to prevent audio from autoplaying so that web apps are kept from consuming data unless in direct response to user input.
The thing is, my audio playback IS in direct response to user input...sort of. However, a bunch of things need to happen before my playback actually starts, and those things happen within the now playing page (calling API to get URL for track to be played, report user is playing track, get track metadata, yatta yatta yatta).
To try and get around this, I have the app preload a silent .1s mp3 file into the audio element on startup. Then in direct response to a click event to transition to now playing screen, I call .play() on the audio element.
I assumed that having a call to .play() in direct response to user input would allow the subsequent call to .play() within the now playing page to work, since this is the behavior I had previously observed.
Only, it didn't seem to make any difference.
Any ideas on how I can adjust my flow in order for audio playback to start after loading the now playing screen?
EDIT:
Added some code snippets below
Vehicle.audio = {
    init: function () {
        // Prime the audio element in direct response to a user gesture
        // by playing a short silent file.
        audioElement.setAttribute("src", "/audioinit.mp3");
        audioElement.play();
    },
    play: function (source) {
        log("VEHICLE: playing audio from source " + source);
        Vehicle.audio.stop();
        audioElement.setAttribute("src", source);
        audioElement.play();
    },
    // stop() is assumed here; the original snippet omits its definition.
    stop: function () {
        audioElement.pause();
    }
};
In response to the user selecting a track:
$(document).on("click", "a.play-track", function () {
    Vehicle.audio.init();
    /* my wrapper function for jQuery.load() */
    replace_wrap("nowplaying.html");
});
Then on the now playing page, Vehicle.audio.play() is called.
Try adding a "touchstart" event listener to the entire "now playing page" which will then synchronously call .play() on the audio element:
$(document).one("touchstart", function () {
    Vehicle.audio.init();
});
This way as soon as an iOS user touches anywhere on the now playing page, the audio should begin loading/playing.
AFAIK it is a restriction placed intentionally by iOS. The user must interact with the device (a touch/click event) before the script is allowed to start playback.