Getting the list of voices in speechSynthesis (Web Speech API) - dom-events

The following HTML shows an empty array in the console on the first click:
<!DOCTYPE html>
<html>
<head>
  <script>
    function test() {
      console.log(window.speechSynthesis.getVoices());
    }
  </script>
</head>
<body>
  <button onclick="test()">Test</button>
</body>
</html>
On the second click you will get the expected list.
If you add an onload handler that calls this function (<body onload="test()">), you get the correct result on the first click. Note that the call from onload itself still doesn't work properly: it returns an empty array on page load, but the function works afterward.
Questions:
Since it might be a bug in the beta version, I gave up on the "why" questions.
Now, the question is if you want to access window.speechSynthesis on page load:
What is the best hack for this issue?
How can you make sure it will load speechSynthesis, on page load?
Background and tests:
I was testing the new features of the Web Speech API when I ran into this problem in my code:
<script type="text/javascript">
$(document).ready(function() {
  // Browser support message. (You might need Chrome 33.0 Beta)
  if (!('speechSynthesis' in window)) {
    alert("You don't have speechSynthesis");
  }

  var voices = window.speechSynthesis.getVoices();
  console.log(voices); // []

  $("#test").on('click', function() {
    var voices = window.speechSynthesis.getVoices();
    console.log(voices); // [SpeechSynthesisVoice, ...]
  });
});
</script>
<a id="test" href="#">click here if 'ready()' didn't work</a>
My question was: why does window.speechSynthesis.getVoices() return an empty array after the page has loaded and the ready handler has fired, while the same function returns an array of Chrome's available voices when triggered by the click handler on the link?
It seems Chrome loads window.speechSynthesis after the page loads!
The problem is not in the ready event. If I remove the var voices = ... line from the ready handler, the first click still shows an empty list in the console, but the second click works fine.
It seems window.speechSynthesis needs more time to load after the first call. You need to call it twice! But you also need to wait and let it load before the second call to window.speechSynthesis. For example, the following code shows two empty arrays in the console if you run it for the first time:
// First speechSynthesis call
var voices = window.speechSynthesis.getVoices();
console.log(voices);
// Second speechSynthesis call
voices = window.speechSynthesis.getVoices();
console.log(voices);

According to the Web Speech API Errata (E11 2013-10-17), the voice list is loaded asynchronously with respect to the page. An onvoiceschanged event is fired when the voices are loaded.
voiceschanged: Fired when the contents of the SpeechSynthesisVoiceList, that the getVoices method will return, have changed. Examples include: server-side synthesis where the list is determined asynchronously, or when client-side voices are installed/uninstalled.
So, the trick is to set your voice from the callback for that event listener:
// wait on voices to be loaded before fetching list
window.speechSynthesis.onvoiceschanged = function() {
window.speechSynthesis.getVoices();
...
};

You can use setInterval to wait until the voices are loaded, use them however you need, and then clear the interval:
var timer = setInterval(function() {
  var voices = speechSynthesis.getVoices();
  console.log(voices);
  if (voices.length !== 0) {
    var msg = new SpeechSynthesisUtterance(/*some string here*/);
    msg.voice = voices[/*some number here to choose from array*/];
    speechSynthesis.speak(msg);
    clearInterval(timer);
  }
}, 200);
// Note: if you want to trigger this from a click, bind a function (not the interval ID),
// e.g. $("#test").on('click', function() { /* start the interval here */ });

After studying the behavior on Google Chrome and Firefox, this is what gets all the voices:
Since it involves something asynchronous, it might be best done with a promise:
const allVoicesObtained = new Promise(function(resolve, reject) {
  let voices = window.speechSynthesis.getVoices();
  if (voices.length !== 0) {
    resolve(voices);
  } else {
    window.speechSynthesis.addEventListener("voiceschanged", function() {
      voices = window.speechSynthesis.getVoices();
      resolve(voices);
    });
  }
});
allVoicesObtained.then(voices => console.log("All voices:", voices));
Note:
When the voiceschanged event fires, we need to call .getVoices() again. The array originally returned will not be populated with content.
On Google Chrome, we don't have to call getVoices() initially; we only need to listen for the event, and it will fire. On Firefox, listening is not enough: you have to call getVoices() first, then listen for the voiceschanged event, and set the array using getVoices() once you are notified.
Using a promise makes the code cleaner. Everything related to getting voices is contained in this promise code. If you don't use a promise but instead put this code in your speech routine, it gets quite messy.
You can write a voiceObtained promise that resolves to the voice you want, and then your function to say something can just do voiceObtained.then(voice => { ... }) and call window.speechSynthesis.speak() inside that handler. Or you can even write a promise speechReady("hello world").then(speech => { window.speechSynthesis.speak(speech) }) to say something.
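For illustration, here is a minimal sketch of those two helpers, building on the allVoicesObtained promise above. The names voiceObtained and speechReady, and the choice of an en-US voice, are assumptions for the example, not part of the original answer:

// Resolves to a preferred voice (here: the first en-US voice, falling back to the first voice).
const voiceObtained = allVoicesObtained.then(voices =>
  voices.find(v => v.lang === 'en-US') || voices[0]
);

// Resolves to a ready-to-speak utterance once a voice is available.
function speechReady(text) {
  return voiceObtained.then(voice => {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.voice = voice;
    return utterance;
  });
}

// Usage:
speechReady("hello world").then(speech => window.speechSynthesis.speak(speech));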

Here's the answer:
function synthVoice(text) {
  const awaitVoices = new Promise(resolve =>
    window.speechSynthesis.onvoiceschanged = resolve
  ).then(() => {
    const synth = window.speechSynthesis;
    var voices = synth.getVoices();
    console.log(voices);

    const utterance = new SpeechSynthesisUtterance();
    utterance.voice = voices[3];
    utterance.text = text;
    synth.speak(utterance);
  });
}

At first I used onvoiceschanged, but it kept firing even after the voices were loaded, so my goal was to avoid onvoiceschanged at all costs.
This is what I came up with. It seems to work so far; I will update if it breaks.
var synth = window.speechSynthesis;
var voices;

loadVoicesWhenAvailable();

function loadVoicesWhenAvailable() {
  voices = synth.getVoices();
  if (voices.length !== 0) {
    console.log("start loading voices");
    LoadVoices(); // your own function that consumes the voices
  } else {
    setTimeout(function () { loadVoicesWhenAvailable(); }, 10);
  }
}
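LoadVoices() is the author's own function and is not shown. For illustration only, a hypothetical version might simply enumerate the loaded voices:

// Hypothetical example of what LoadVoices() could do (not from the original answer)
function LoadVoices() {
  voices.forEach(function (voice) {
    console.log(voice.name + " (" + voice.lang + ")");
  });
}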

The setInterval solution by Salman Oskooi was perfect.
Please see https://jsfiddle.net/exrx8e1y/
function myFunction() {
  var dtlarea = document.getElementById("details");
  //dtlarea.style.display="none";
  var dtltxt = "";

  var mytimer = setInterval(function() {
    var voices = speechSynthesis.getVoices();
    //console.log(voices);
    if (voices.length !== 0) {
      var msg = new SpeechSynthesisUtterance();
      msg.rate = document.getElementById("rate").value;     // 0.1 to 10
      msg.pitch = document.getElementById("pitch").value;   // 0 to 2
      msg.volume = document.getElementById("volume").value; // 0 to 1
      msg.text = document.getElementById("sampletext").value;
      msg.lang = document.getElementById("lang").value;     // 'hi-IN';
      for (var i = 0; i < voices.length; i++) {
        dtltxt += voices[i].lang + ' ' + voices[i].name + '\n';
        if (voices[i].lang == msg.lang) {
          msg.voice = voices[i]; // Note: some voices don't support altering params
          msg.voiceURI = voices[i].voiceURI;
          // break;
        }
      }
      msg.onend = function(e) {
        console.log('Finished in ' + e.elapsedTime + ' seconds.');
        dtlarea.value = dtltxt;
      };
      speechSynthesis.speak(msg);
      clearInterval(mytimer);
    }
  }, 1000);
}
This works fine on Chrome for Mac, Linux (Ubuntu), Windows, and Android.
Android has the non-standard en_GB while the others have en-GB as the language code.
Also, you will see that the same language (lang) has multiple names.
On Mac Chrome you get en-GB Daniel besides en-GB Google UK English Female and en-GB Google UK English Male:
en-GB Daniel (Mac and iOS)
en-GB Google UK English Female
en-GB Google UK English Male
en_GB English United Kingdom
hi-IN Google हिन्दी
hi-IN Lekha (Mac and iOS)
hi_IN Hindi India

Another way to ensure voices are loaded before you need them is to bind their loading state to a promise, and then dispatch your speech commands from a then:
const awaitVoices = new Promise(done => speechSynthesis.onvoiceschanged = done);

function listVoices() {
  awaitVoices.then(() => {
    let voices = speechSynthesis.getVoices();
    console.log(voices);
  });
}
When you call listVoices, it will either wait for the voices to load first, or dispatch your operation on the next tick.
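As a usage sketch following the same pattern (the speak helper below is illustrative, not part of the original answer), the same promise can gate actual speech commands:

// Illustrative only: dispatch a speech command once the voices promise settles
function speak(text) {
  awaitVoices.then(() => {
    const utterance = new SpeechSynthesisUtterance(text);
    // Pick whichever voice you want from the now-populated list.
    utterance.voice = speechSynthesis.getVoices()[0];
    speechSynthesis.speak(utterance);
  });
}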

I used this code to load voices successfully:
<select id="voices"></select>
...
function loadVoices() {
  populateVoiceList();
  if (speechSynthesis.onvoiceschanged !== undefined) {
    speechSynthesis.onvoiceschanged = populateVoiceList;
  }
}

function populateVoiceList() {
  var allVoices = speechSynthesis.getVoices();
  allVoices.forEach(function(voice, index) {
    var option = $('<option>').val(index).html(voice.name).prop("selected", voice.default);
    $('#voices').append(option);
  });
  if (allVoices.length > 0 && speechSynthesis.onvoiceschanged !== undefined) {
    // unregister event listener (it is fired multiple times)
    speechSynthesis.onvoiceschanged = null;
  }
}
I found the onvoiceschanged code in this article: https://hacks.mozilla.org/2016/01/firefox-and-the-web-speech-api/
Note: requires jQuery.
Works in Firefox/Safari and Chrome (and in Google Apps Script too - but only in the HTML).

async function speak(txt) {
  await initVoices();
  const u = new SpeechSynthesisUtterance(txt);
  u.voice = speechSynthesis.getVoices()[3];
  speechSynthesis.speak(u);
}

function initVoices() {
  return new Promise(function (res, rej) {
    // If the voices are already available, resolve right away;
    // otherwise wait for the voiceschanged event.
    if (speechSynthesis.getVoices().length > 0) {
      res();
    } else {
      window.speechSynthesis.onvoiceschanged = () => res();
    }
  });
}

While the accepted answer works great, if you're using an SPA and not loading a full page, the voices will not be available when navigating between links.
This will only run on a full page load:
window.speechSynthesis.onvoiceschanged
For an SPA, it won't run.
You can check whether it's undefined and register the handler; otherwise, get the voices from the window object directly.
An example that works:
let voices = [];

if (window.speechSynthesis.onvoiceschanged == undefined) {
  window.speechSynthesis.onvoiceschanged = () => {
    voices = window.speechSynthesis.getVoices();
  };
} else {
  voices = window.speechSynthesis.getVoices();
}
// console.log("voices", voices);

I had to do my own research on this to make sure I understood it properly, so I'm just sharing (feel free to edit).
My goal is to:
Get a list of voices available on my device
Populate a select element with those voices (after a particular page loads)
Use easy to understand code
The basic functionality is demonstrated in MDN's official live demo of:
https://github.com/mdn/web-speech-api/tree/master/speak-easy-synthesis
but I wanted to understand it better.
To break the topic down...
SpeechSynthesis
The SpeechSynthesis interface of the Web Speech API is the controller
interface for the speech service; this can be used to retrieve
information about the synthesis voices available on the device, start
and pause speech, and other commands besides.
Source
onvoiceschanged
The onvoiceschanged property of the SpeechSynthesis interface
represents an event handler that will run when the list of
SpeechSynthesisVoice objects that would be returned by the
SpeechSynthesis.getVoices() method has changed (when the voiceschanged
event fires.)
Source
Example A
If my application merely has:
var synth = window.speechSynthesis;
console.log(synth);
console.log(synth.onvoiceschanged);
Chrome's developer tools console will show the SpeechSynthesis object, with onvoiceschanged logged as null (the screenshot from the original post is not reproduced here).
Example B
If I change the code to:
var synth = window.speechSynthesis;
console.log("BEFORE");
console.log(synth);
console.log(synth.onvoiceschanged);
console.log("AFTER");
var voices = synth.getVoices();
console.log(voices);
console.log(synth);
console.log(synth.onvoiceschanged);
The before and after states are the same, and voices is an empty array.
Solution
Although I'm not confident implementing Promises, the following worked for me:
Defining the function
var synth = window.speechSynthesis;

// declare so that values are accessible globally
var voices = [];

function set_up_speech() {
  return new Promise(function(resolve, reject) {
    // get the voices (assign to the global; do not shadow it with a local var)
    voices = synth.getVoices();

    // get reference to select element
    var $select_topic_speaking_voice = $("#select_topic_speaking_voice");

    // for each voice, generate select option html and append to select
    for (var i = 0; i < voices.length; i++) {
      var option = $("<option></option>");
      var suffix = "";

      // if it is the default voice, add suffix text
      if (voices[i].default) {
        suffix = " -- DEFAULT";
      }

      // create the option text
      var option_text = voices[i].name + " (" + voices[i].lang + suffix + ")";

      // add the option text
      option.text(option_text);

      // add option attributes
      option.attr("data-lang", voices[i].lang);
      option.attr("data-name", voices[i].name);

      // append option to select element
      $select_topic_speaking_voice.append(option);
    }

    // resolve with the voices value
    resolve(voices);
  });
}
Calling the function
// in your handler, populate the select element
if (page_title === "something") {
set_up_speech()
}
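To actually use the selected voice, a hypothetical handler (not part of the original answer; the function name and lookup are illustrative) could look the voice up again by the data-name attribute set above:

// Illustrative only: speak some text with whatever voice is selected in the dropdown
function speak_selected(text) {
  var selected_name = $("#select_topic_speaking_voice option:selected").attr("data-name");
  var utterance = new SpeechSynthesisUtterance(text);
  // find the matching voice in the list populated by set_up_speech()
  utterance.voice = voices.filter(function(v) { return v.name === selected_name; })[0];
  synth.speak(utterance);
}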

Android Chrome: turn off Data Saver. It was helpful for me. (Chrome 71.0.3578.99)
// wait until the voices load
window.speechSynthesis.onvoiceschanged = function() {
window.speechSynthesis.getVoices();
};

// `message` is assumed to be defined elsewhere in the surrounding code
let voices = speechSynthesis.getVoices();
let gotVoices = false;

if (voices.length) {
  resolve(voices, message);
} else {
  speechSynthesis.onvoiceschanged = () => {
    if (!gotVoices) {
      voices = speechSynthesis.getVoices();
      gotVoices = true;
      if (voices.length) resolve(voices, message);
    }
  };
}

function resolve(voices, message) {
  var synth = window.speechSynthesis;
  let utter = new SpeechSynthesisUtterance();
  utter.lang = 'en-US';
  utter.voice = voices[65];
  utter.text = message;
  utter.volume = 1.0; // volume ranges from 0 to 1
  synth.speak(utter);
}
Works for Edge, Chrome and Safari - doesn't repeat the sentences.

Related

chrome.omnibox ceases working after period of time. Begins working after restarting extension

I'm leveraging Google Chrome's omnibox API in my extension.
Current users, including myself, have noticed that the omnibox stops responding entirely after some undetermined state change or period of time. Typing the keyword that triggers omnibox mode stops having any effect, and the URL bar does not shift into omnibox mode.
Restarting Google Chrome does not fix the issue, but restarting my extension by unchecking and then re-checking the 'Enabled' checkbox on chrome://extensions does resolve it.
Does anyone have any suggestions on what to investigate? Below is the code used. It is only loaded once, through my permanently persisted background page:
// Displays streamus search suggestions and allows instant playing in the stream
define([
'background/collection/streamItems',
'background/model/video',
'common/model/youTubeV2API',
'common/model/utility'
], function (StreamItems, Video, YouTubeV2API, Utility) {
'use strict';
console.log("Omnibox LOADED", chrome.omnibox);
var Omnibox = Backbone.Model.extend({
defaults: function () {
return {
suggestedVideos: [],
searchJqXhr: null
};
},
initialize: function () {
console.log("Omnibox INITIALIZED");
var self = this;
chrome.omnibox.setDefaultSuggestion({
// TODO: i18n
description: 'Press enter to play.'
});
// User has started a keyword input session by typing the extension's keyword. This is guaranteed to be sent exactly once per input session, and before any onInputChanged events.
chrome.omnibox.onInputChanged.addListener(function (text, suggest) {
// Clear suggested videos
self.get('suggestedVideos').length = 0;
var trimmedSearchText = $.trim(text);
// Clear suggestions if there is no text.
if (trimmedSearchText === '') {
suggest();
} else {
// Do not display results if searchText was modified while searching, abort old request.
var previousSearchJqXhr = self.get('searchJqXhr');
if (previousSearchJqXhr) {
previousSearchJqXhr.abort();
self.set('searchJqXhr', null);
}
var searchJqXhr = YouTubeV2API.search({
text: trimmedSearchText,
// Omnibox can only show 6 results
maxResults: 6,
success: function(videoInformationList) {
self.set('searchJqXhr', null);
var suggestions = self.buildSuggestions(videoInformationList, trimmedSearchText);
suggest(suggestions);
}
});
self.set('searchJqXhr', searchJqXhr);
}
});
chrome.omnibox.onInputEntered.addListener(function (text) {
// Find the cached video data by url
var pickedVideo = _.find(self.get('suggestedVideos'), function(suggestedVideo) {
return suggestedVideo.get('url') === text;
});
// If the user doesn't make a selection (commonly when typing and then just hitting enter on their query)
// take the best suggestion related to their text.
if (pickedVideo === undefined) {
pickedVideo = self.get('suggestedVideos')[0];
}
StreamItems.addByVideo(pickedVideo, true);
});
},
buildSuggestions: function(videoInformationList, text) {
var self = this;
var suggestions = _.map(videoInformationList, function (videoInformation) {
var video = new Video({
videoInformation: videoInformation
});
self.get('suggestedVideos').push(video);
var safeTitle = _.escape(video.get('title'));
var textStyleRegExp = new RegExp(Utility.escapeRegExp(text), "i");
var styledTitle = safeTitle.replace(textStyleRegExp, '<match>$&</match>');
var description = '<dim>' + video.get('prettyDuration') + "</dim> " + styledTitle;
return {
content: video.get('url'),
description: description
};
});
return suggestions;
}
});
return new Omnibox();
});
As far as I'm aware the code itself is fine and wouldn't have any effect on whether I see omnibox or not.
You can find full source code here: https://github.com/MeoMix/StreamusChromeExtension/blob/master/src/js/background/model/omnibox.js

WebRTC SDP object (local description) by Firefox does not contain DataChannel info unlike Chrome?

I'm testing the WebRTC procedure step by step to understand it.
I wrote a test site for serverless WebRTC:
http://webrtcdevelop.appspot.com/
In fact, Google's STUN server is used, but no signalling server is deployed.
The Session Description Protocol (SDP) is exchanged manually by hand, that is, copy-pasted between browser windows.
So far, here is the result I've got with the code:
'use strict';
var peerCon;
var ch;
$(document)
.ready(function()
{
init();
$('#remotebtn2')
.attr("disabled", "");
$('#localbtn')
.click(function()
{
offerCreate();
$('#localbtn')
.attr("disabled", "");
$('#remotebtn')
.attr("disabled", "");
$('#remotebtn2')
.removeAttr("disabled");
});
$('#remotebtn')
.click(function()
{
answerCreate(
new RTCSessionDescription(JSON.parse($('#remote')
.val())));
$('#localbtn')
.attr("disabled", "");
$('#remotebtn')
.attr("disabled", "");
$('#remotebtn')
.attr("disabled", "");
});
$('#remotebtn2')
.click(function()
{
answerGet(
new RTCSessionDescription(JSON.parse($('#remote')
.val())));
$('#remotebtn2')
.attr("disabled", "");
});
$('#msgbtn')
.click(function()
{
msgSend($('#msg')
.val());
});
});
var init = function()
{
//offer------
peerCon =
new RTCPeerConnection(
{
"iceServers": [
{
"url": "stun:stun.l.google.com:19302"
}]
},
{
"optional": []
});
var localDescriptionOut = function()
{
console.log(JSON.stringify(peerCon.localDescription));
$('#local')
.text(JSON.stringify(peerCon.localDescription));
};
peerCon.onicecandidate = function(e)
{
console.log(e);
if (e.candidate === null)
{
console.log('candidate empty!');
localDescriptionOut();
}
};
ch = peerCon.createDataChannel(
'ch1',
{
reliable: true
});
ch.onopen = function()
{
dlog('ch.onopen');
};
ch.onmessage = function(e)
{
dlog(e.data);
};
ch.onclose = function(e)
{
dlog('closed');
};
ch.onerror = function(e)
{
dlog('error');
};
};
var msgSend = function(msg)
{
ch.send(msg);
}
var offerCreate = function()
{
peerCon
.createOffer(function(description)
{
peerCon
.setLocalDescription(description, function()
{
//wait for complete of peerCon.onicecandidate
}, error);
}, error);
};
var answerCreate = function(description)
{
peerCon
.setRemoteDescription(description, function()
{
peerCon
.createAnswer(
function(description)
{
peerCon
.setLocalDescription(description, function()
{
//wait for complete of peerCon.onicecandidate
}, error);
}, error);
}, error);
};
var answerGet = function(description)
{
peerCon.setRemoteDescription(description, function()
{ //
console.log(JSON.stringify(description));
dlog('local-remote-setDescriptions complete!');
}, error);
};
var error = function(e)
{
console.log(e);
};
var dlog = function(msg)
{
var content = $('#onmsg')
.html();
$('#onmsg')
.html(content + msg + '<br>');
}
Firefox(26.0):
RtpDataChannels
onopen event is fired successfully, but send fails.
Chrome(31.0):
RtpDataChannels
onopen event is fired successfully, and send also succeeded.
The SDP object from Chrome is as follows:
{"sdp":".................. cname:L5dftYw3P3clhLve
\r\
na=ssrc:2410443476 msid:ch1 ch1
\r\
na=ssrc:2410443476 mslabel:ch1
\r\
na=ssrc:2410443476 label:ch1
\r\n","type":"offer"}
where the ch1 information defined in the code,
ch = peerCon.createDataChannel(
'ch1',
{
reliable: false
});
is bundled properly.
However, the SDP object (local description) from Firefox does not contain the DataChannel information at all; moreover, the SDP is much shorter than Chrome's, with less information bundled.
What am I missing?
I guess the reason send fails on the DataChannel is this lack of information in the SDP object generated by Firefox.
How can I fix this?
I investigated the sources of various working libraries, such as PeerJS, EasyRTC, and SimpleWebRTC, but cannot figure out the reason.
Any suggestions or recommended reading are appreciated.
[not an answer, yet]
I leave this here just to try to help you. I am not much of a WebRTC developer, but I am curious; this is quite new and very interesting to me.
Have you seen this ?
DataChannels
Supported in Firefox today, you can use DataChannels to send peer-to-peer
information during an audio/video call. There is
currently a bug that requires developers to set up some sort of
audio/video stream (even a “fake” one) in order to initiate a
DataChannel, but we will soon be fixing that.
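Based on that note, one possible workaround is to attach a dummy audio stream before creating the offer. This is only a sketch and is not from the original thread; in particular, the fake constraint was a Firefox-specific testing option at the time and may not be supported, so treat it as an assumption:

// Firefox-era sketch: request a (possibly fake) audio stream so the offer
// includes media, then proceed with the DataChannel offer as before.
navigator.mozGetUserMedia({ audio: true, fake: true }, function(stream) {
  peerCon.addStream(stream);
  offerCreate();
}, error);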
Also, I found this bug report, which seems to be related.
One last point: your version of adapter.js is different from the one served on code.google.com, and by a lot. The webrtcDetectedVersion part is missing in yours.
https://code.google.com/p/webrtc/source/browse/stable/samples/js/base/adapter.js
Try that, and come back to me with good news.
After the last update, I have this line in the console after clicking 'get answer':
Object { name="INVALID_STATE", message="Cannot set remote offer in
state HAVE_LOCAL_OFFER", exposedProps={...}, more...}
But this might be useless info, since I copy-pasted the same browser's offer into the answer field.
...which made me notice you are using jQuery v1.7.1 from jquery.com.
Try updating jQuery (before I kill a kitten), and in the meantime, make sure you use updated versions of all the scripts.
Whoops, after a quick read of this: https://developer.mozilla.org/en-US/docs/Web/Guide/API/WebRTC/WebRTC_basics and then comparing your JavaScript, I see no shim.
Shims
As you can imagine, with such an early API, you must use the browser
prefixes and shim it to a common variable.
var PeerConnection = window.mozRTCPeerConnection || window.webkitRTCPeerConnection;
var IceCandidate = window.mozRTCIceCandidate || window.RTCIceCandidate;
var SessionDescription = window.mozRTCSessionDescription || window.RTCSessionDescription;
navigator.getUserMedia = navigator.getUserMedia || navigator.mozGetUserMedia || navigator.webkitGetUserMedia;

Repeatedly Grab DOM in Chrome Extension

I'm trying to teach myself how to write Chrome extensions and ran into a snag when I realized that my jQuery was breaking because it was getting information from the extension page itself and not the tab's current page like I had expected.
Quick summary, my sample extension will refresh the page every x seconds, look at the contents/DOM, and then do some stuff with it. The first and last parts are fine, but getting the DOM from the page that I'm on has proven very difficult, and the documentation hasn't been terribly helpful for me.
You can see the code that I have so far at these links:
Current manifest
Current js script
Current popup.html
If I want to have the ability to grab the DOM on each cycle of my setInterval call, what more needs to be done? I know that, for example, I'll need to have a content script. But do I also need to specify a background page in my manifest? Where do I need to call the content script within my extension? What's the easiest/best way to have it communicate with my current js file on each reload? Will my content script also be expecting me to use jQuery?
I know that these questions are basic and will seem trivial to me in retrospect, but they've really been a headache trying to explore completely on my own. Thanks in advance.
In order to access the web page's DOM you'll need to programmatically inject some code into it (using chrome.tabs.executeScript()).
That said, although it is possible to grab the DOM as a string, pass it back to your popup, load it into a new element, and look for whatever you want, this is a really bad approach (for various reasons).
The best option (in terms of efficiency and accuracy) is to do the processing in the web page itself and then pass just the results back to the popup. Note that in order to be able to inject code into a web page, you have to include the corresponding host match pattern in the permissions property of your manifest.
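For example, a minimal sketch of the relevant manifest entries (manifest v2, as used with chrome.tabs.executeScript; the extension name, host pattern, and popup file are placeholders, not taken from the question):

{
  "manifest_version": 2,
  "name": "DOM grabber",
  "version": "0.1",
  "permissions": [
    "tabs",
    "http://example.com/*"
  ],
  "browser_action": { "default_popup": "popup.html" }
}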
What I describe above can be achieved like this:
editorMarket.js
var refresherID = 0;
var currentID = 0;
$(document).ready(function(){
$('.start-button').click(function(){
oldGroupedHTML = null;
oldIndividualHTML = null;
chrome.tabs.query({ active: true }, function(tabs) {
if (tabs.length === 0) {
return;
}
currentID = tabs[0].id;
refresherID = setInterval(function() {
chrome.tabs.reload(currentID, { bypassCache: true }, function() {
chrome.tabs.executeScript(currentID, {
file: 'content.js',
runAt: 'document_idle',
allFrames: false
}, function(results) {
if (chrome.runtime.lastError) {
alert('ERROR:\n' + chrome.runtime.lastError.message);
return;
} else if (results.length === 0) {
alert('ERROR: No results !');
return;
}
var nIndyJobs = results[0].nIndyJobs;
var nGroupJobs = results[0].nGroupJobs;
$('.lt').text('Indy: ' + nIndyJobs + '; '
+ 'Grouped: ' + nGroupJobs);
});
});
}, 5000);
});
});
$('.stop-button').click(function(){
clearInterval(refresherID);
});
});
content.js:
(function() {
function getNumberOfIndividualJobs() {...}
function getNumberOfGroupedJobs() {...}
function comparator(grouped, individual) {
var IndyJobs = getNumberOfIndividualJobs();
var GroupJobs = getNumberOfGroupedJobs();
nIndyJobs = IndyJobs[1];
nGroupJobs = GroupJobs[1];
console.log(GroupJobs);
return {
nIndyJobs: nIndyJobs,
nGroupJobs: nGroupJobs
};
}
var currentGroupedHTML = $(".grouped_jobs").html();
var currentIndividualHTML = $(".individual_jobs").html();
var result = comparator(currentGroupedHTML, currentIndividualHTML);
return result;
})();

continous speech recognition with Webkit speech api

I started using this browser (Chrome) feature.
I've written some JS based on it, but the problem is that it recognises speech only once and then ends. It doesn't run continuously; I need to press the button again and again to restart speech recognition. Tell me where I should tweak it. I've set recognition.continuous = true, but it's still not helping.
var recognition = new webkitSpeechRecognition();
recognition.continuous = true;
recognition.interimResults = true;
recognition.onstart = function() {
console.log("Recognition started");
};
recognition.onresult = function(event){
console.log(event.results);
};
recognition.onerror = function(e) {
console.log("Error");
};
recognition.onend = function() {
console.log("Speech recognition ended");
};
function start_speech() {
recognition.lang = 'en-IN'; // 'en-US' works too, as do many others
recognition.start();
}
I call "start_speech" from a button ! thats it
I know this is an old thread, but I had this problem too. I found that, even with the continuous flag set, if there are pauses in the input speech, a "no-speech" error gets thrown (triggering the onerror event) and the engine shuts down. I just added code in onend to restart the engine:
recognition.onend = function() {
recognition.start();
};
The next problem you might get is that every time the engine restarts, the user has to re-grant permission for the browser to use the microphone. The only solution at this time seems to be to make sure you connect to your site over HTTPS (source: http://updates.html5rocks.com/2013/01/Voice-Driven-Web-Apps-Introduction-to-the-Web-Speech-API, bottom of the post, in bold).
Perhaps there's a typo on this line:
recognition.continuos = true;
Should equal:
recognition.continuous = true;
recognition.onend = function() {
  recognition.start();
  // sets off a beep/noise each time it is accessed from a cell phone (Android).
  // does NOT if accessed from a desktop (Windows using Chrome).
};

chrome indexed database setVersion request filled with exceptions

I am trying to get the following code to work in Chrome by using setVersion (as onupgradeneeded is not available yet).
The IDBVersionChangeRequest is filled with an IDBDatabaseException, and the onsuccess function never gets called. I need to create an ObjectStore within that onsuccess function.
Specifically, this line: request = browserDatabase._db.setVersion(browserDatabase._dbVersion.toString());
Below is my code. Any help would be greatly appreciated...
browserDatabase._db = null;
browserDatabase._dbVersion = 4;
browserDatabase._dbName = "mediaStorageDB";
browserDatabase._storeName = "myStore";
var request = indexedDB.open(browserDatabase._dbName);
// database exist
request.onsuccess = function(e)
{
browserDatabase._db = e.target.result;
// this is specifically for chrome, because it does not support onupgradeneeded
if (browserDatabase._dbVersion != browserDatabase._db.version)
{
request = browserDatabase._db.setVersion(browserDatabase._dbVersion.toString());
request.onerror = function(e) { alert("error") };
request.onblocked = function(e)
{
b = 11; // for some reason the code goes here...
}
request.onsuccess = function(e)
{
browserDatabase._db.createObjectStore(browserDatabase._storeName, {autoIncrement: true});
}
}
}
In your code sample you say you end up in the onblocked callback. The only way you can get into this callback is when you still have open transactions/connections to your db (aside from the one you are working in). This means you will have to close all other connections before you can call setVersion.
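As an illustrative sketch (not from the original answer): other pages or tabs holding a connection to the same database can listen for onversionchange and close their connection, so the setVersion request in the upgrading tab is not blocked.

// In every other page/tab that opens the same database:
var otherRequest = indexedDB.open(browserDatabase._dbName);
otherRequest.onsuccess = function(e) {
  var otherDb = e.target.result;
  // Fired when another connection calls setVersion (or a versioned open).
  otherDb.onversionchange = function() {
    otherDb.close(); // release the connection so the version change can proceed
  };
};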
When weird things happen with IndexedDB, I "Clear data from hosted apps", quit all Chrome windows, and take a cup of coffee. After that everything works fine. :-D
If browserDatabase._dbVersion < browserDatabase._db.version, downgrading is not possible. dbVersion = 4 should not be taken lightly: you might have another tab open with dbVersion = 5, or the browser may be waiting for your response elsewhere or updating itself. None of these are worth tracing the reasons behind.