Is the Speech Synthesis API supported by Chromium? Do I need to install voices? If so, how can I do that? I'm using Fedora. Are voices like video, where I need to install an extra package for them to work?
I've tried this code:
var msg = new SpeechSynthesisUtterance('I see dead people!');
msg.voice = speechSynthesis.getVoices().filter(function(voice) {
  return voice.name == 'Whisper';
})[0];
speechSynthesis.speak(msg);
from the article Web apps that talk - Introduction to the Speech Synthesis API,
but the function speechSynthesis.getVoices() returns an empty array.
I've also tried:
window.speechSynthesis.onvoiceschanged = function() {
  console.log(window.speechSynthesis.getVoices());
};
the function gets executed, but the array is also empty.
On https://fedoraproject.org/wiki/Chromium there is info to use the --enable-speech-dispatcher flag, but when I used it I got a warning that the flag is not supported.
Is the Speech Synthesis API supported by Chromium?
Yes, the Web Speech API has basic support in the Chromium browser, though there are several issues with both the Chromium and Firefox implementations of the specification; see Blink>Speech, Internals>SpeechSynthesis, Web Speech.
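If in doubt, you can feature-detect before relying on the API; a minimal sketch:
// Check that the browser exposes the Speech Synthesis API at all
if ('speechSynthesis' in window && 'SpeechSynthesisUtterance' in window) {
  console.log('Speech Synthesis API is available');
} else {
  console.log('Speech Synthesis API is not available in this browser');
}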
Do I need to install voices? If so, how can I do that? I'm using Fedora. Are voices like video, where I need to install an extra package for them to work?
Yes, voices need to be installed. By default, Chromium does not ship with voices to set at the SpeechSynthesisUtterance voice attribute; see How to use Web Speech API at chromium?; How to capture generated audio from window.speechSynthesis.speak() call?.
You can install speech-dispatcher as the system's speech synthesis server and espeak as the speech synthesizer:
$ yum install speech-dispatcher espeak
You can also create a configuration file for speech-dispatcher in your home folder to set specific options for both speech-dispatcher and the output module that you use, for example espeak:
$ spd-conf -u
Launching Chromium with the --enable-speech-dispatcher flag automatically spawns a connection to speech-dispatcher, and you can set the LogLevel between 0 and 5 to review the SSIP communication between Chromium code and speech-dispatcher.
.getVoices() returns results asynchronously and needs to be called twice
See this Electron issue on GitHub: Speech Synthesis: No Voices #586.
window.speechSynthesis.onvoiceschanged = e => {
  const voices = window.speechSynthesis.getVoices();
  // do speech synthesis stuff
  console.log(voices);
};
window.speechSynthesis.getVoices();
or composed as an asynchronous function which returns a Promise whose value is the array of voices:
(async () => {
  const getVoices = (voiceName = "") => {
    return new Promise(resolve => {
      window.speechSynthesis.onvoiceschanged = e => {
        // optionally filter the returned voices by `voiceName`
        // resolve(
        //   window.speechSynthesis.getVoices()
        //     .filter(({ name }) => /^en.+whisper/.test(name))
        // );
        resolve(window.speechSynthesis.getVoices());
      };
      window.speechSynthesis.getVoices();
    });
  };
  const voices = await getVoices();
  console.log(voices);
})();
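Once the Promise resolves you can pick a voice and speak, mirroring the code from the question. A minimal sketch, assuming the getVoices() helper above is in scope; which voices are returned depends on what is installed on your system:
(async () => {
  const voices = await getVoices();
  if (voices.length) {
    const msg = new SpeechSynthesisUtterance('I see dead people!');
    // pick the first voice here; filter by .name or .lang as needed
    msg.voice = voices[0];
    window.speechSynthesis.speak(msg);
  }
})();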
After preparing the migration of my Chrome manifest V2 extension to manifest V3 and reading about the problems with persistent service workers, I prepared myself for a battle with the unknown. My V2 background script uses a whole bunch of globally declared variables, and I expected I would need to refactor that.
But to my great surprise, my extension's background script seems to work out of the box without any trouble in manifest V3. My extension uses externally_connectable. The typical use case for my extension is that the user navigates to my website 'bla.com' and from there can send jobs to the extension background script.
My manifest says:
"externally_connectable": {
"matches": [
"*://localhost/*",
"https://*.bla.com/*"
]
}
My background script listens for external messages and connections:
chrome.runtime.onMessageExternal.addListener((message, sender, sendResponse) => {
  log('received external message', message);
});
chrome.runtime.onConnectExternal.addListener(function(port) {
  messageExternalPort = port;
  // onDisconnect is an event; register a listener rather than calling it
  messageExternalPort.onDisconnect.addListener(function() {
    messageExternalPort = null;
  });
});
From bla.com I send messages to the extension as follows:
chrome.runtime.sendMessage(EXTENSION_ID, { type: "collect" });
From bla.com I receive messages from the extension as follows:
const setUpExtensionListener = () => {
  // Connect to the chrome extension
  this.port = chrome.runtime.connect(EXTENSION_ID, { name: 'query' });
  // Add listener
  this.port.onMessage.addListener(handleExtensionMessage);
};
I tested all scenarios, including anticipating the famous service worker unload after 5 minutes, or after 30 seconds of inactivity, but it all seems to work. Good for me, but something is itchy. I cannot find any documentation that explains precisely under which circumstances the service worker is unloaded. I do not understand why things seem to work out of the box in my situation and why so many others experience problems. Can anybody explain, or refer me to proper documentation? Thanks in advance.
I want to play a stream from gstreamer in a web browser.
I played around with RTP, WebRTC and SDP files but, while VLC was able to connect to the stream via a simple SDP file, browsers were not. I later understood that WebRTC requires a secure connection, which only complicates things and is not needed for my purposes. I stumbled upon the Media Source Extensions (MSE) of HTML5, which seem like they could help, but I'm not able to find a comprehensive tutorial or appropriate specs on how to get gstreamer to stream the correct data and then how to play it using MSE. I'm also not sure about the latency of using MSE.
So is there a way to play a stream from gstreamer in a browser?
Thanks.
Using the node-webrtc project, I was able to combine output from gstreamer with a WebRTC call. For gstreamer, there is a project which enables its use with node: gstreamer-superficial. So basically, you need to run the gstreamer process from the node process, which can then control the output from gstreamer. For every gstreamer frame, a callback is invoked which takes the frame and can send it into WebRTC calls.
Then the WebRTC call needs to be implemented. Some signaling protocol is required for the calls. One side of the call will be the server and the other will be the client's browser, instead of two browsers. Then a video track is created, onto which the frames from gstreamer-superficial are pushed.
const { RTCVideoSource } = require("wrtc").nonstandard;
const gstreamer = require("gstreamer-superficial");
const source = new RTCVideoSource();
// This is the WebRTC video track which should be used with addTransceiver, see below
const track = source.createTrack();
const frame = {
  width: 1920,
  height: 1080,
  data: null
};
const pipeline = new gstreamer.Pipeline("v4l2src ! videorate ! video/x-raw,format=YUY2,width=1920,height=1080,framerate=25/1 ! videoconvert ! video/x-raw,format=I420 ! appsink name=sink");
const appsink = pipeline.findChild("sink");
const pull = function() {
  appsink.pull(function(buf, caps) {
    if (buf) {
      // feed the raw I420 frame into the WebRTC video source
      frame.data = new Uint8Array(buf);
      try {
        source.onFrame(frame);
      } catch (e) {}
      pull();
    } else if (!caps) {
      console.log("PULL DROPPED");
      setTimeout(pull, 500);
    }
  });
};
pipeline.play();
pull();
// Example:
const useTrack = SomeRTCPeerConnection => SomeRTCPeerConnection.addTransceiver(track, { direction: "sendonly" });
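The signaling itself is up to you. As a rough sketch, not a complete implementation, the server side of the call built on wrtc could look like the following, where sendToClient is a placeholder for whatever signaling transport you use (for example a WebSocket), and the browser's answer is later applied with pc.setRemoteDescription():
const { RTCPeerConnection } = require("wrtc");

// `track` is the gstreamer-fed track created above with source.createTrack()
async function startCall(sendToClient) {
  const pc = new RTCPeerConnection();
  pc.addTransceiver(track, { direction: "sendonly" });
  // trickle ICE candidates to the browser over the signaling channel
  pc.onicecandidate = ({ candidate }) => {
    if (candidate) sendToClient({ type: "candidate", candidate });
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToClient({ type: "offer", sdp: pc.localDescription });
  return pc;
}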
I want to send streaming data (as sequences of ArrayBuffer) from a Chrome extension to a Chrome App. Since the Chrome message API (including chrome.runtime.sendMessage, postMessage...) does not support ArrayBuffer, and JS arrays have poor performance, I had to try other methods. Eventually, I found that WebRTC over RTCDataChannel might be a good solution in my case.
I succeeded in sending a string over an RTCDataChannel, but when I tried to send an ArrayBuffer I got:
code: 19
message: "Failed to execute 'send' on 'RTCDataChannel': Could not send data"
name: "NetworkError"
It seems that it's not a bandwidth limit problem, since it failed even when I sent a single byte of data. Here is my code:
pc = new RTCPeerConnection(configuration, { optional: [{ RtpDataChannels: true }] });
//...
var dataChannel = pc.createDataChannel("mydata", { reliable: true });
//...
var ab = new ArrayBuffer(8);
dataChannel.send(ab);
Tested on OS X 10.10.1 with Chrome M40 (Stable) and M42 (Canary), and on a Chromebook with M40.
I have filed a bug for WebRTC here.
I modified my code, and now everything works amazingly:
removed the RtpDataChannels option when creating the RTCPeerConnection (YES, remove the RtpDataChannels option if you want a data channel, what a magical world!)
on the receiver side: there is no need to call createDataChannel; instead, handle onmessage and the other events using event.channel from the pc.ondatachannel callback:
pc.ondatachannel = function(event) {
  var receiveChannel = event.channel;
  receiveChannel.onmessage = function(event) {
    console.log("Got Data Channel Message:", event.data);
  };
};
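For completeness, a minimal sketch of the sending side with the RtpDataChannels option removed; signaling is omitted, and send() is only called once the channel's onopen fires:
var pc = new RTCPeerConnection(configuration); // note: no RtpDataChannels option
var dataChannel = pc.createDataChannel("mydata", { reliable: true });
dataChannel.onopen = function() {
  var ab = new ArrayBuffer(8);
  dataChannel.send(ab); // the ArrayBuffer now goes through
};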
I have created two Chrome apps and I want to pass some data (in string format) from one Chrome app to the other. I'd appreciate it if someone could show me the correct way of doing this.
It's an RTFM question.
From the Messaging documentation (note that it mentions extensions, but it also works for apps):
In addition to sending messages between different components in your extension, you can use the messaging API to communicate with other extensions. This lets you expose a public API that other extensions can take advantage of.
You need to send messages using chrome.runtime.sendMessage (using the app ID) and receive them using the chrome.runtime.onMessageExternal event. If required, long-lived connections can also be established; a sketch follows the example below.
// App 1
var app2id = "abcdefghijklmnoabcdefhijklmnoab2";
chrome.runtime.onMessageExternal.addListener(
  // This should fire even if the app is not running, as long as it is
  // included in the event page (background script)
  function(request, sender, sendResponse) {
    if (sender.id == app2id && request.data) {
      // Use the data passed
      // Pass an answer with sendResponse() if needed
    }
  }
);
// App 2
var app1id = "abcdefghijklmnoabcdefhijklmnoab1";
chrome.runtime.sendMessage(app1id, {data: /* some data */},
  function(response) {
    if (response) {
      // Installed and responded
    } else {
      // Could not connect; not installed
      // Maybe inspect chrome.runtime.lastError
    }
  }
);
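If you need a long-lived connection rather than one-off messages, a minimal sketch using the same IDs as above:
// App 1: accept long-lived connections from App 2
chrome.runtime.onConnectExternal.addListener(function(port) {
  if (port.sender.id != app2id) return;
  port.onMessage.addListener(function(msg) {
    // Handle msg; reply with port.postMessage(...)
  });
});

// App 2: open a port to App 1 and exchange messages
var port = chrome.runtime.connect(app1id, { name: "data" });
port.postMessage({ data: "some data" });
port.onMessage.addListener(function(msg) {
  // Handle the reply
});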
I am trying to use these two methods (from WP 8) in Windows Phone 8.1, but they give errors and don't compile, most probably because they have been removed. I tried searching for the new APIs but couldn't find any. What are the alternatives for these?
Dispatcher.BeginInvoke( () => {}); msdn link
System.Threading.Thread.Sleep(); msdn link
They still exist for Windows Phone 8.1 Silverlight apps, but not for Windows Phone Store apps. The replacements for Windows Store apps are:
Sleep (see Thread.Sleep replacement in .NET for Windows Store):
await System.Threading.Tasks.Task.Delay(TimeSpan.FromSeconds(30));
Dispatcher (see How the Deployment.Current.Dispatcher.BeginInvoke work in windows store app?):
CoreDispatcher dispatcher = CoreWindow.GetForCurrentThread().Dispatcher;
await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { });
Dispatcher.BeginInvoke( () => {}); is replaced by
await this.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, async () => {});
and System.Threading.Thread.Sleep(); is replaced by
await Task.Delay(TimeSpan.FromSeconds(doubleValue));
Be aware that not only has the API changed (adopting the API from Windows Store apps), but the way the Dispatcher is obtained has also changed since Windows Phone 8.0.
@Johan Faulk's suggestion, although it will work, may return null under a multitude of conditions.
Old code to grab the dispatcher:
var dispatcher = Deployment.Current.Dispatcher;
or
Deployment.Current.Dispatcher.BeginInvoke(()=>{
// any code to modify UI or UI bound elements goes here
});
In Windows 8.1, Deployment is no longer an available object or namespace.
To make sure the main UI thread's dispatcher is obtained, use the following:
var dispatcher = CoreApplication.MainView.CoreWindow.Dispatcher;
or
CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
  CoreDispatcherPriority.Normal,
  () => {
    // UI code goes here
  });
Additionally, although the method says it will be executed asynchronously, the await keyword cannot be used directly inside the method invoked by RunAsync (in the above example the method is anonymous). To await a method inside that anonymous method, mark the anonymous method passed to RunAsync() with the async keyword.
CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
  CoreDispatcherPriority.Normal,
  async () => {
    // UI code goes here
    var response = await LongRunningMethodAsync();
  });
For Dispatcher, try this. MSDN
private async Task MyMethod()
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () => { });
}
For Thread.Sleep() try await Task.Delay(1000). MSDN