Running HTML5 webkitSpeechRecognition API programmatically

I am interested in running the webkitSpeechRecognition API programmatically. I want to take an audio file that is uploaded to a server and use the webkitSpeechRecognition API on the back-end to recognize the text and return the result to the client.
One possibility is running some form of "embedded" version of Chrome, but I'm not sure how I would pass in the audio input. Another would be to use some form of C++ bindings to access the API, but I'm not sure if this is overly complicated.
Is this possible? How could this be accomplished?

I've done this before, though not on any large scale. I used this software:
http://vb-audio.pagesperso-orange.fr/Cable/index.htm
which I found via this question:
Play audio as microphone input
With that you can recognize anything you play through your speakers: the program creates a virtual microphone that streams audio from a virtual speaker it also creates.
As for your embedded version of Chrome, you could try grabbing the Chromium source and replacing the code that reads from the microphone with code that reads from a file. I don't know how far you'll get with that, though; I've never read that code.
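For reference, here's a minimal sketch of the recognition side. Assuming the virtual cable is set as Chrome's default microphone, a page like this will transcribe whatever is played into the virtual speaker (the continuous/interimResults settings here are just one reasonable configuration, not the only one):

```javascript
// Minimal sketch: run in a Chrome page while the virtual cable is the default mic.
const recognition = new webkitSpeechRecognition();
recognition.continuous = true;      // keep listening across pauses
recognition.interimResults = false; // only report final transcriptions
recognition.lang = 'en-US';

recognition.onresult = (event) => {
  // Each result holds one recognized phrase; start at resultIndex to skip
  // results we've already seen.
  for (let i = event.resultIndex; i < event.results.length; i++) {
    console.log(event.results[i][0].transcript);
  }
};

recognition.onerror = (event) => console.error('Recognition error:', event.error);

recognition.start(); // Chrome will prompt for microphone permission
```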

Related

Streaming video in HTML5/JS

I currently have an IPTV subscription and as a fun little side project I decided to create a multiplatform IPTV app. However, I'm running into some trouble when trying to stream video.
The .m3u playlist I am currently using has streaming links; however, they do not end in .m3u8 as I am usually accustomed to.
When I do a GET request to the link in Insomnia, it begins to download content with a MIME type of video/mp2t.
I have tried using hls.js along with a few other HTML video players, but I cannot seem to get it to work.
The playlist does work with VLC!
I feel like I am missing something, just not sure what.
Thanks!!
If your IPTV service has content that the provider wants to restrict, then they may encrypt the content, use DRM, and/or obfuscate access to the manifest files and segment streams.
The reason they do this is to ensure that only their apps can be used to play back the content. This does not always apply just to paid services; the content owner may require the content to be encrypted even for free IPTV services.
You can still do your experiment and build your multiscreen project using test streams that are available online. There are both DASH and HLS streams available in a number of places; see here for a useful list:
https://bitmovin.com/mpeg-dash-hls-examples-sample-streams/
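As a concrete starting point, here is a minimal hls.js sketch; the manifest URL below is just an example of a public test stream, so substitute any working .m3u8 from the list above:

```javascript
// Minimal hls.js sketch; the test manifest URL is only an example.
import Hls from 'hls.js'; // or load hls.js via a <script> tag instead

const video = document.querySelector('video');
const src = 'https://test-streams.mux.dev/x36xhzz/x36xhzz.m3u8'; // example test stream

if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(src);    // fetch and parse the .m3u8 manifest
  hls.attachMedia(video); // feed the demuxed segments into the <video> element
  hls.on(Hls.Events.MANIFEST_PARSED, () => video.play());
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Safari plays HLS natively, no hls.js needed.
  video.src = src;
  video.play();
}
```

Note that hls.js needs an actual HLS manifest (.m3u8); if your link serves a bare video/mp2t transport stream, that would also explain why VLC plays it but browser players don't.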

Why Audio Transforming Functions and Audio Visualizations are not connecting to streaming source

I am working with the Web Audio API and I combined two project repositories from GitHub to get a cool app that takes streaming user input, transforms it, and provides an output for the user to download. I am having trouble connecting the streaming voice to the filters (pitch, etc.), the audio visualizations, and a distortion filter. I am a student and I do not have much experience with the audio API. Can anybody help me? (First repository: https://github.com/mdn/web-dictaphone, second repository: https://github.com/urtzurd/html-audio/tree/gh-pages/static, my project repository: https://github.com/PatrykWajs/web-dictaphone)
Thank you for your time.
I managed to combine both repositories into one project, but I cannot understand why things won't connect. I tried various things.
There aren't any errors. It seems like everything is working fine, but the streaming voice input is not connected to the filters and visualization.
I think it is working just fine; however, you're getting hit by the autoplay restrictions. If you open up the developer console, you'll see a message that tells you so: the AudioContext will not go into playing mode unless it's created (or resume()d) from within a user gesture (like a click handler). You can test this by going into the site settings (click the lock icon in the address bar, then "Site Settings") and setting "Sound" to "Allow" instead of the automatic default.
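A common fix is to create or resume() the AudioContext from inside a click handler. Here's a minimal sketch, where audioCtx stands in for the AudioContext your combined project already creates and #start is a hypothetical button:

```javascript
// Minimal sketch: resume a suspended AudioContext from a user gesture.
// `audioCtx` is assumed to be the AudioContext the app already creates.
const startButton = document.querySelector('#start'); // hypothetical button

startButton.addEventListener('click', async () => {
  if (audioCtx.state === 'suspended') {
    await audioCtx.resume(); // allowed here because we're inside a user gesture
  }
  console.log('AudioContext state:', audioCtx.state); // should now be "running"
});
```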

BackgroundAudioAgent with a dynamic playlist

I am developing an app that has the capability of playing audio tracks streamed from a server. This app needs to be able to playback the audio even when the screen is locked or if the app is put in background.
For background audio playback on Windows Phone, a Background Audio Agent is required.
The sample provided by Microsoft shows the basics: http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202978(v=vs.105).aspx
In the sample, the background audio agent has a static list of tracks; when the user taps skip/prev in the main project, the action is simply forwarded to the singleton BackgroundAudioPlayer object, which in turn uses the event handler in the BackgroundAudioAgent project to load the next/previous song.
BUT, what I would think is a common use case is that the main project has the details of the playlist (like a list retrieved from a server), and we just need the background audio agent to forward that request to the main project.
My question is:
Is there ANY way to forward the user action from the audio agent to the main project, so the main project can determine which track to play?
P.S.: I cannot use MediaElement (which seemed to work fine in a Windows Store app and provides background support) because in the Windows Phone SDK it has no background support.
EDIT: When the screen is locked, the application itself can be terminated even if the background agent is running, so I guess there is no mechanism to forward the request to the app. This would mean the background agent has to be self-sufficient... which would be a poor design, having to jump through hoops for a seemingly common behavior (playing audio stored on a remote server that requires authentication).
At this point, I am considering writing all URL-specific information to a file and having the background audio agent read that saved file, authenticate with the server, and create the audio tracks. But the handshake to show the current audio information when the application resumes will be complex, to say the least.
I hope I am wrong about this and there is actually an easier way. I would love to see how others have handled this.

Is there a way to persist cookies or HTML5 localStorage across WebBrowser instances on Windows Phone?

Short version: I have a WebBrowser control hosted in a Windows Phone 8 app. How can I store values from javascript so that they persist across the user closing and reopening my app?
Long version:
I'm developing a Windows Phone 8 application that has a single WebBrowser control hosted in a single MainPage.xaml page that lives for the entire life of my app. I created the app with the "Windows Phone HTML5 App" project type in Visual Studio 2012. 99% of my application is hosted in web pages (on the internet, not stored on the phone) that I direct the WebBrowser to when the app starts up. In my application's web pages I'm trying to persist data across pages and across sessions. For example, once the user logs in, I want to store that on the phone so the next time they start the app they don't have to log in again.
Cookies and HTML5 Local Storage (via window.localStorage.setItem and getItem) both work fine for sharing data across pages in the app while the app is running and even if you switch out of the app (via the Windows phone "hard button") and go back in. But if the user exits the app by pressing the hard "back" button then the next time the app is started all localStorage and cookies seem to be gone.
Is this the expected behavior? I guess I'm not sure where the WebBrowser would store the data (Isolated Storage? Or maybe in the same place it's stored when going to the web site with Internet Explorer?). In any case, if there's no "fix" for this, can anyone suggest the best way for me to provide my own storage mechanism so that I can let my JavaScript code persist values across instances of my app running? I'm happy to use the app's Isolated Storage if only I knew of a way to store and retrieve values from it using JavaScript. Thank you.
I'm not sure if this is expected behaviour or not.
To get at the Isolated Storage you will need to use JS/.NET interop.
If you want to trigger the persistent storage from JS:
Use window.external.notify in JS, generating a JSON string (for instance) to pass along to the .NET side. That string could be written to IsolatedStorage without the .NET side having to parse the data. You could use IsolatedStorageSettings.ApplicationSettings or a full file, depending on the size of the data.
Alternatively, you could trigger the process from .NET:
Call WebBrowser.InvokeScript to call a JS function which returns the same JSON string representing your data.
The .NET side could detect and restore this data on startup and use WebBrowser.InvokeScript to pass the JSON string back into the WebBrowser via a JS function.
You'd of course have to deal with error cases (attempting to restore bad/corrupt JSON).
Also, if you trigger this from .NET in response to the App.Closing event, watch out that you don't take too long writing the data.
The faster the better, but it definitely needs to finish within 10 seconds or the OS will kill your app.
See MSDN docs for WebBrowser.InvokeScript() and ScriptNotify registration to window.external.notify.
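To make that concrete, here is a rough sketch of the JavaScript half of the interop; persistState and restoreState are made-up names for illustration:

```javascript
// JS side of the interop sketch; persistState/restoreState are hypothetical names.
// Written in ES5 since the WP8 WebBrowser control is IE10-based.

// Push data out to .NET: the host's ScriptNotify handler receives this string
// and can write it straight to IsolatedStorage without parsing it.
function persistState() {
  var state = JSON.stringify({
    loggedIn: true,
    token: window.localStorage.getItem('token') // example value to persist
  });
  window.external.notify(state);
}

// Called from .NET on startup via WebBrowser.InvokeScript("restoreState", json)
// to hand the saved JSON back to the page.
function restoreState(json) {
  try {
    var state = JSON.parse(json);
    if (state.token) {
      window.localStorage.setItem('token', state.token);
    }
  } catch (e) {
    // Bad/corrupt JSON from storage; fall back to a clean session.
  }
}
```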

Chrome Extension Development - need help getting started

I'd like to try my hand at some Chrome extension development. The most I have done with extensions is writing some small Greasemonkey scripts in the past.
I would like to use localStorage to store some data and then reveal the data on an extension button click later on. (It seems like this would be done with a popup page.)
How do I run a script every time a page from, let's say, http://www.facebook.com/* is loaded?
How do I get access to the page? I think, based on my localStorage requirement, I would have to go down the background_page route (correct?). Can the background page and the popup page communicate via localStorage?
UPDATE:
I'm actually looking to learn the "Chrome way". I'm not really looking to run an existing Greasemonkey script.
Google actually has some pretty good documentation on creating extensions. I recommend thoroughly reading the following two articles if you haven't already done so:
http://code.google.com/chrome/extensions/getstarted.html
http://code.google.com/chrome/extensions/overview.html
If you want to give your extension access when the user browses to Facebook, you'll need to declare that in the extension's manifest.
Unless you're wanting to save data beyond the life of the browser process, you probably don't need to use local storage. In-memory data can just be stored as part of the background page.
Content scripts (which run when you load a page) and background pages (which exist for the duration of the browser process) can communicate via message passing, which is described here:
http://code.google.com/chrome/extensions/messaging.html
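For a feel of what that looks like, here's a minimal sketch of a content script and a background page exchanging a message. The file names and message shape are assumptions, the manifest entry in the first comment is what gives the content script access to Facebook pages, and it uses the current chrome.runtime calls (the docs linked above describe the equivalent older chrome.extension.sendRequest/onRequest functions):

```javascript
// content-script.js — declared in manifest.json, e.g.:
//   "content_scripts": [{ "matches": ["http://www.facebook.com/*"],
//                         "js": ["content-script.js"] }]
chrome.runtime.sendMessage({ type: 'pageVisited', url: location.href }, (reply) => {
  console.log('Background acknowledged:', reply);
});

// background.js — lives for the duration of the browser session
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (message.type === 'pageVisited') {
    // localStorage here persists across browser restarts; use an in-memory
    // variable instead if you only need data for the current session.
    localStorage.setItem('lastVisit', message.url);
    sendResponse({ ok: true });
  }
});
```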
Overall, I'd suggest spending some time browsing the Developer's Guide and becoming familiar with the concepts and examples.
Chrome has a feature to automatically convert Greasemonkey scripts to extensions!