Client-side Speech Recognition on a mobile browser? - google-chrome

I am working on a project targeted at browsers on smartphones, and I can't seem to find any way to do client-side speech recognition: the mobile version of Chrome doesn't even support Google's own Web Speech API. Does anybody know how to get speech recognition working in a mobile browser like Chrome or Firefox? Or is there a workaround, such as a third-party service that provides APIs to call?

Potential duplicate: Safari Speech Recognition
NOPE: https://caniuse.com/#feat=speech-recognition
NOPE: https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API
Not client-side, but it might be of some assistance: https://cloud.google.com/speech-to-text/docs/streaming-recognize (see the sketch after these links)
Also this: https://github.com/Kitt-AI/snowboy (it seems you need to set up your own Raspberry Pi)
And this: https://github.com/tensorflow/tfjs. I can totally imagine doing all the machine learning in the browser, just as you can run Windows 95 in it and whatnot.
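To illustrate the server-side workaround linked above (not an official sample; the /api/transcribe endpoint is hypothetical and something you would have to build yourself): record a short clip in the mobile browser with MediaRecorder, POST it to your own backend, and let the backend forward it to a recognizer such as Google Cloud Speech-to-Text.
// Sketch only: capture a few seconds of microphone audio and send it to a
// hypothetical backend endpoint (/api/transcribe) for server-side recognition.
async function recordAndTranscribe(durationMs = 5000) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  const stopped = new Promise((resolve) => (recorder.onstop = resolve));
  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
  await stopped;
  stream.getTracks().forEach((t) => t.stop());

  const blob = new Blob(chunks, { type: recorder.mimeType });
  const response = await fetch('/api/transcribe', { method: 'POST', body: blob });
  return (await response.json()).transcript;
}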

Related

SpeechRecognition for a Firefox web extension

As Chrome has the webkitSpeechRecognition API for speech detection, what can we use for a Firefox WebExtension? (The Web Speech API is not working for me.)
I'm making an extension that will continuously listen for speech and then process it.
I have already made a Chrome extension which is up and running, so I wanted to extend it to Firefox. I need an alternative to this line (which is for the Chrome extension):
recognition = new webkitSpeechRecognition();
I haven't used the Speech Recognition API yet, but you should be able to use
recognition = new SpeechRecognition();
In order for this to work, you need to set
media.webspeech.recognition.enable
to true in about:config. Source: https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API
There are a number of outstanding bugs around the implementation of Web Speech in Firefox, so I'm not sure the above is implemented or working well. The basic implementation was done in Firefox 44 / 49.
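A minimal sketch of how that might look, assuming the browser actually exposes a working constructor (unprefixed in Firefox, webkit-prefixed in Chrome) and, for Firefox, that the about:config flag above is enabled:
// Pick whichever constructor the browser exposes; neither is guaranteed to exist.
const SpeechRecognitionCtor =
  window.SpeechRecognition || window.webkitSpeechRecognition;

if (SpeechRecognitionCtor) {
  const recognition = new SpeechRecognitionCtor();
  recognition.continuous = true;      // keep listening, as the extension requires
  recognition.interimResults = false;

  recognition.onresult = (event) => {
    const last = event.results[event.results.length - 1];
    console.log('Heard:', last[0].transcript);
  };
  recognition.onerror = (event) => console.error('Recognition error:', event.error);

  recognition.start();
} else {
  console.warn('No SpeechRecognition implementation available in this browser.');
}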

Casting without a dongle and multi-device "screen sharing"

Since the Chromecast dongle runs a lightweight version of Chrome and can display (cast ;-)) content from the cloud, wouldn't it make sense for the full version of Chrome (on Windows, Android, and other platforms) to also be able to act as a display device?
In other words, it would let you cast from a smartphone to a tablet, laptop, or anything that runs Chrome. Simply add these devices to the list of castable targets...
Additionally, it looks like a simple and great way to also support sharing to multiple screens...
I suspect you are confusing Chrome the web browser (https://en.wikipedia.org/wiki/Google_Chrome) with Chrome OS the operating system (http://en.wikipedia.org/wiki/Chrome_OS).
The Chromecast, as currently available, is only good for accepting commands from compatible software (Chrome the browser, the YouTube app on Android or iOS, etc.), at which point it streams the requested data from the internet, not from your phone/tablet/computer.
Your idea is not supported, which is surprising, as Android has had built-in support for screen mirroring via Miracast since v4.2.
There is also no mention of direct content streaming, for instance via DLNA, an open protocol designed for exactly that.
In fact, a better approach would have been to make the Chromecast dongle a DLNA device and then implement support for it in Google Play Music and YouTube.
https://developers.google.com/cast/devprev - I don't see that ability offhand... it mentions having to email MAC addresses, websites, and so on to be 'whitelisted' for development; if you could just use your browser, I wouldn't expect the dev process to be so convoluted.

Can a Google Chrome extension flash an OS-style alert?

For example, Growl flashes a message in the upper-right corner regardless of which application is running, and Skype puts a little red badge with the number of new messages on its Dock icon. Is there any way to write a Chrome extension with this type of functionality? That is, I want to write a chat system that works in the browser but also notifies users when they have closed my site's tab, or even when Chrome is not running.
I could write a native client in addition to my browser-based client, but that's double the work. (Triple the work if you bother with a native Windows client versus just OS X, but who would do that?)
Chrome can create "Desktop Notifications" - see the documentation.
It's also possible to have notifications visible when Chrome is closed, provided you create a "background process". This question/answer might point you in the right direction.
Chrome supports the text-only version of the W3C desktop notification standard. Chrome deprecated the HTML version about a year ago and will stop supporting it very soon. Any webpage viewed in Chrome and any Chrome extension can use this API.
The Rich Notifications API is available to Chrome extensions and packaged apps. As of today, it's dev-channel only and is iterating rapidly. It has implementations on ChromeOS and Windows, with Mac on the way. Linux currently delegates to the W3C implementation. The API is not a 1:1 replacement of W3C HTML desktop notifications, but it does provide many layout options for common use cases.
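For the extension case, a minimal sketch using the chrome.notifications API; this assumes the "notifications" permission is declared in manifest.json, and icon.png is a placeholder asset you would bundle yourself:
// Sketch of a basic rich notification created from an extension's background page.
chrome.notifications.create('new-message', {
  type: 'basic',
  iconUrl: 'icon.png',               // placeholder icon shipped with the extension
  title: 'New chat message',
  message: 'You have 1 unread message.'
}, (notificationId) => {
  console.log('Notification shown:', notificationId);
});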

Are there any web standards for voice over IP?

Web browser plug-ins such as Flash already provide VoIP functionality in a web browser, but is it possible to have browser VoIP without any plug-ins?
Ericsson Labs has posted information on using the device element to allow microphone input. This, combined with WebSockets, could be used to implement VoIP. However, the device element is not implemented in any web browser yet.
No, there isn't, but the device element will likely be the way forward, as you mentioned. I don't think it will take browsers too long to look into it, however. There are also the WAC APIs, but they are mobile-only and not shipping quite yet.
Update: there is now a standard in development called WebRTC. Drafts of this spec are supported by Chrome and Firefox. Microsoft has made an alternative proposal called CU-RTC-Web.
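A bare-bones sketch of what plug-in-free browser VoIP looks like with WebRTC. The signaling channel (how the offer/answer and ICE candidates reach the other peer) is deliberately left as a placeholder, since WebRTC does not specify it:
// Capture the microphone and set up the calling side of an audio-only
// RTCPeerConnection. sendToPeer() stands in for your own signaling transport
// (e.g. a WebSocket) and is not part of any standard.
async function startCall(sendToPeer) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  pc.onicecandidate = (e) => { if (e.candidate) sendToPeer({ candidate: e.candidate }); };
  pc.ontrack = (e) => {
    const audio = new Audio();
    audio.srcObject = e.streams[0];   // play the remote party's audio
    audio.play();
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ sdp: pc.localDescription });

  return pc;
}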

Does Chrome have built-in speech recognition for "x-webkit-speech" input elements?

I'm wondering how
<input type="text" x-webkit-speech speech />
works. Is there a speech recognition engine built into Chrome, or is it accessing an underlying speech recognition facility in the operating system?
Yup, Chrome does speech recognition via Google's servers. But there's no reason that other browsers couldn't choose to implement it differently (for example using some speech recognition facility in the OS).
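For completeness, this is roughly how a page used the prefixed attribute. x-webkit-speech has since been removed from Chrome, and the webkitspeechchange event name here is from memory, so treat it purely as an illustration; the recognized text ends up in the input's value either way:
<input type="text" id="q" x-webkit-speech>
<script>
  // Recognized speech is written into the input's value; the prefixed
  // webkitspeechchange event (if this Chrome build fires it) signals when.
  document.getElementById('q').addEventListener('webkitspeechchange', function () {
    console.log('Recognized:', this.value);
  });
</script>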
Balu, your link is actually a bit out of date. The latest Google proposal can be found here: http://www.w3.org/2005/Incubator/htmlspeech/2010/10/google-api-draft.html
Although speech recognition has been available in the Chrome dev channel for some time, it has not shipped yet and we're not yet sure when it will ship. We definitely want people to play with the API and offer feedback on it, but we don't think it's quite ready for prime time yet.
According to the code, it sends the audio data as a POST request to:
https://www.google.com/speech-api/v1/recognize?client=chromium&lang=??&lm=??&xhw=??&maxresults=3
lm is the grammar in the code, and xhw is hardware_info, which a comment says is optional. The audio appears to be Speex, x-speex-with-header-byte:
// Encode the frame and place the size of the frame as the first byte. This
// is the packet format for MIME type x-speex-with-header-byte.
It looks like it would be pretty trivial to modify the chrome code to use in your own app.
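As a rough reconstruction of that request (the endpoint, parameters, and key requirements have changed over the years, so this is illustrative only; see the update below about API keys):
// Illustrative only: replay the wire format described above from Node 18+.
// 'utterance.spx' is a placeholder for Speex audio whose packets are prefixed
// with their frame size (the x-speex-with-header-byte format).
const fs = require('fs');
const audio = fs.readFileSync('utterance.spx');

fetch('https://www.google.com/speech-api/v1/recognize?client=chromium&lang=en-US&maxresults=3', {
  method: 'POST',
  headers: { 'Content-Type': 'audio/x-speex-with-header-byte; rate=16000' },
  body: audio,
})
  .then((res) => res.json())
  .then((result) => console.log(result))
  .catch((err) => console.error(err));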
Update:
You also need a speech recognition API key, and keys are limited to 50 requests per day. There is no way to increase that limit, not even by paying.
There is an experimental fork of speexenc that can encode the x-speex-with-header-byte MIME binary format; it is referenced on the QXIP Wiki and is available on GitHub. It does the job fine by placing the size of the frame as the first byte of each packet.
They are using their own API for speech recognition, that is, sending a POST request to their servers.
Speech recognition is a proposal by Google. https://docs.google.com/View?id=dcfg79pz_5dhnp23f5
The feature ships with Chrome 8+, and it looks like it sends the data to Google's servers to perform the actual recognition.
This feature now works in the Chrome 11 beta.
Check this out:
http://slides.html5rocks.com/#speech-input
This might be of interest: https://github.com/taf2/speech2text, Ruby bindings for the Google speech-to-text API.
Yes, Chrome does have built-in speech support through WebKit; just look at the Google homepage (which now has a microphone to the right of the search box). I wonder, however, if the Chrome team is working on Omnibox speech support. After all, Chrome is a WebKit-based browser!
I just confirmed this on my Chrome Cr-48; it works.
There is also a working group that produced http://www.w3.org/TR/xhtml+voice/, but I don't believe this is implemented in any browser except Opera.