Why aren't audio transforming functions and audio visualizations connecting to the streaming source? - html5-audio

I am working with the Web Audio API, and I combined two project repositories from GitHub to get a cool app that takes streaming user input, transforms the input, and provides an output for the user to download. My problem is connecting the streaming voice to the filters (pitch, etc.), the audio visualizations, and a distortion filter. I am a student and I do not have much experience with the audio API. Can anybody help me? (First repository: https://github.com/mdn/web-dictaphone, second repository: https://github.com/urtzurd/html-audio/tree/gh-pages/static, my project repository: https://github.com/PatrykWajs/web-dictaphone).
Thank you for your time.
I managed to combine both repositories into one project, but I cannot understand why things won't connect; I have tried various things.
There aren't any errors. Everything seems to be working fine, but the streaming voice input is not connected to the filters and visualization.

I think it is working just fine; however, you're getting hit by the autoplay restrictions. If you open the developer console, you'll see a message telling you so - the AudioContext will not go into playing mode unless it is created (or resumed via resume()) from within a user gesture (like a click handler). You can test this by going into the site settings (right-click the lock icon in the address bar, click "Site Settings") and setting "Audio" to "Allow" instead of "auto".
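A minimal sketch of that fix (assuming a button with id "start" in your markup - the element id and the node choices are placeholders, not code from your repository): create or resume the context inside the click handler, then wire the mic stream through the filter and analyser nodes.

```javascript
let audioCtx;

document.getElementById('start').addEventListener('click', async () => {
  // Creating the context inside a user gesture satisfies the autoplay policy.
  if (!audioCtx) {
    audioCtx = new (window.AudioContext || window.webkitAudioContext)();
  }
  if (audioCtx.state === 'suspended') {
    await audioCtx.resume(); // resume() must also run inside a gesture
  }

  // Wire the microphone stream through a (placeholder) filter chain
  // and an analyser for the visualization.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = audioCtx.createMediaStreamSource(stream);
  const filter = audioCtx.createBiquadFilter();  // stand-in for your pitch/distortion chain
  const analyser = audioCtx.createAnalyser();    // feeds the visualization

  source.connect(filter);
  filter.connect(analyser);
  analyser.connect(audioCtx.destination);

  console.log('AudioContext state:', audioCtx.state); // should now log "running"
});
```

If the context logs "running" but you still hear nothing, check that every node in the chain is actually connect()ed through to audioCtx.destination.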

Related

Access Drawn Features on Map with Puppeteer?

I'm starting to experiment with Puppeteer, and it looks great for some of our needs, such as creating screenshots and PDFs, but I'm trying to determine if I can use it to access features a user has drawn on a map (we're using OpenLayers, but this also applies to Google Maps, Leaflet, etc.).
Example:
I can get a PDF of https://openlayers.org/en/latest/examples/draw-and-modify-features.html, but I have no idea where to start with (or whether I should even use) Puppeteer to capture what the user has drawn, whether they've re-centered the map, or anything else.
Edit:
Added an example image of the user interaction. Can I capture the drawn elements with Puppeteer, or would I need to save and recreate them through Puppeteer commands when generating a PDF with Puppeteer?
My (still growing) understanding of Puppeteer is that you have to recreate the user interaction. This can be done by saving the features (to a cookie, file, database, etc.) and restoring them in an on-load type event that runs when you want to use Puppeteer. Since Puppeteer sends a new request for the page to the server, it doesn't know what the user has done at that point.
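A hedged sketch of that approach - window.loadFeatures is a hypothetical hook your map page would need to expose (e.g. re-adding saved GeoJSON to the OpenLayers source), not a real Puppeteer or OpenLayers API:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/map', { waitUntil: 'networkidle0' });

  // Features you persisted (cookie, file, database, ...) when the user drew them.
  const savedFeatures = { type: 'FeatureCollection', features: [] };

  // Replay them inside the page before printing.
  await page.evaluate((geojson) => {
    window.loadFeatures(geojson); // hypothetical hook in your map page
  }, savedFeatures);

  await page.pdf({ path: 'map.pdf' });
  await browser.close();
})();
```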

Running HTML5 webkitSpeechRecognition API programmatically

I am interested in running the webkitSpeechRecognition API programmatically. I want to take an audio file that is uploaded to a server and use the webkitSpeechRecognition API on the back-end to recognize the text and return the result to the client.
One possibility is running some form of "embedded" version of Chrome, but I'm not sure how I would pass in the audio input. Another would be to use some form of C++ bindings to access the API, but I'm not sure if this is overly complicated.
Is this possible? How could this be accomplished?
I've done this before, but not on any large scale. I used this software,
http://vb-audio.pagesperso-orange.fr/Cable/index.htm
that I found via this link: Play audio as microphone input
With that, you can recognize anything you play through your speakers; the program creates a virtual mic that streams audio from virtual speakers it also creates.
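On the browser side, driving the recognizer is short - a minimal sketch; with the virtual cable set as the default microphone, it will transcribe whatever plays through the (virtual) speakers:

```javascript
const recognition = new webkitSpeechRecognition();
recognition.continuous = true;      // keep listening instead of stopping after one phrase
recognition.interimResults = false; // only report finalized results

recognition.onresult = (event) => {
  // Each result holds one or more alternatives; take the top one.
  const latest = event.results[event.results.length - 1];
  console.log('Recognized:', latest[0].transcript);
};

recognition.start();
```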
As for your embedded version of Chrome, you could try grabbing the Chromium source and replacing the code where it reads from the mic with code that reads from a file. I don't know how far you'll get with that, though; I've never read that code.

Getting videos in Google Drive (and Microsoft OneDrive) to show in a video tag?

I have a website with a web page containing an HTML5 video tag, where the user can supply a URL and it will play in the video tag.
The web page uses JavaScript commands that control the video tag - for instance, it can pause the video, seek to a different point in the video, etc.
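For context, a minimal sketch of the kind of control involved - userSuppliedUrl and the element id are placeholders:

```javascript
// Assumes a <video id="player"> element on the page.
const video = document.getElementById('player');

video.src = userSuppliedUrl; // e.g. a direct streaming URL from Azure
video.pause();               // pause playback
video.currentTime = 42;      // seek to a different point (seconds)
video.play();                // resume from there
```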
It works fine with the cloud. Videos stored on Microsoft Azure can be used, for instance (Azure gives you a way to get a URL to any video in your cloud storage, and streams it too).
However, I have users who store videos on Google Drive and also on Microsoft OneDrive.
From what I can see, I can play these videos, but only in a page (probably with Google's own player in it) on their site.
It seems that there is no way to get a URL to these videos that I can put in a video tag.
Without the ability to do that, I can't use the JavaScript commands that work with the HTML5 video tag.
Is there any workaround?
Or am I missing something?
Thanks.
For playing videos that are stored in Google Drive using your app:
You need OAuth2 credentials to access the user's drive, but assuming you have the OAuth part covered:
You can create a Drive application as a Google App Engine app and deploy it as part of your website.
Enable the Drive SDK and set the open URL to your website (which you have verified).
-> Basically, this tells Drive to redirect to your website whenever the user clicks the video (from his drive).
When Drive redirects to your website, a JSON payload will be sent; it contains information such as the fileId. From there, I think you can call the files().get() method to retrieve the information you need to play the video.
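A hedged sketch of that last step using the Drive v3 JavaScript client - it assumes gapi is loaded with the Drive discovery document and OAuth is already done, and fileIdFromDriveState is a placeholder for the id parsed from Drive's redirect payload:

```javascript
gapi.client.drive.files.get({
  fileId: fileIdFromDriveState, // placeholder: id from the Drive redirect's JSON state
  fields: 'id, name, mimeType, webContentLink'
}).then((response) => {
  const file = response.result;
  console.log('File:', file.name, file.mimeType);
  // webContentLink may work as a direct source URL, subject to the
  // user's permissions and Drive's own restrictions on streaming.
});
```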
I advise you to take a look at this course on Code School.

BackgroundAudioAgent with a dynamic playlist

I am developing an app that can play audio tracks streamed from a server. This app needs to be able to play back the audio even when the screen is locked or the app is put in the background.
For background audio playback on Windows Phone, a Background Audio Agent is required.
The sample provided by Microsoft shows the basics: http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202978(v=vs.105).aspx
In the sample, the background audio agent has a static list of tracks; when the user taps skip/prev in the main project, it simply forwards the action to the singleton BackgroundAudioPlayer object, which in turn uses the event handler in the BackgroundAudioAgent project to load the next/previous song.
BUT, what I would think is a common use case is that the main project has the details of the playlist (like a list retrieved from a server), and we just need the background audio agent to forward that request to the main project.
My question is
Is there ANY way to forward the user action from the audio agent to the main project, so the main project can determine which track to play?
P.S.: I cannot use MediaElement (which seemed to work fine in a Windows Store app and provides background support) because in the Windows Phone SDK it has no background support.
EDIT: When the screen is locked, the application itself can be terminated even while the background agent is running, so I guess there is no mechanism to forward the request to the app. This would mean the background agent has to be self-sufficient, which seems like poor design: having to jump through hoops for seemingly common behavior (playing audio stored on a remote server that requires authentication).
At this point, I am considering writing all URL-specific information to a file and having the background audio agent read that saved file, authenticate with the server, and create the audio tracks. But the handshake to show the current audio information when the application resumes will be complex, to say the least.
I hope I am wrong about this and there is actually an easier way. I would love to see how others have handled this.

Chrome Extension Development - need help getting started

I'd like to try my hand at some Chrome Extension Development. The most I have done with extensions is writing some small Greasemonkey scripts in the past.
I would like to use localStorage to store some data and then reveal the data later when an extension button is clicked. (It seems like this would be done with a popup page.)
How do I run a script every time a page from, let's say, http://www.facebook.com/* is loaded?
How do I get access to the page? Based on my localStorage requirement, I think I would have to go down the background_page route (correct?). Can the background page and the popup page communicate through localStorage?
UPDATE:
I'm actually looking to learn the "Chrome way". I'm not really looking to run an existing Greasemonkey script.
Google actually has some pretty good documentation on creating extensions. I recommend thoroughly reading the following two articles if you haven't already done so:
http://code.google.com/chrome/extensions/getstarted.html
http://code.google.com/chrome/extensions/overview.html
If you want to give your extension access when the user browses to Facebook, you'll need to declare that in the extension's manifest.
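A minimal sketch of such a manifest (assuming manifest version 2; all names and file paths are placeholders):

```json
{
  "name": "Facebook Helper",
  "version": "1.0",
  "manifest_version": 2,
  "background": { "page": "background.html" },
  "browser_action": { "default_popup": "popup.html" },
  "content_scripts": [
    {
      "matches": ["http://www.facebook.com/*"],
      "js": ["content.js"]
    }
  ]
}
```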
Unless you want to save data beyond the life of the browser process, you probably don't need to use local storage. In-memory data can just be stored as part of the background page.
Content scripts (which run when you load a page) and background pages (which exist for the duration of the browser process) can communicate via message passing, which is described here:
http://code.google.com/chrome/extensions/messaging.html
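A minimal sketch of that pattern (using the chrome.runtime messaging calls; older docs refer to chrome.extension.sendRequest, but the idea is the same - file names are placeholders):

```javascript
// content.js - runs on pages matched in the manifest and reports data
// to the background page.
chrome.runtime.sendMessage({ type: 'pageData', url: location.href }, (reply) => {
  console.log('Background replied:', reply);
});
```

```javascript
// background.js - receives messages from content scripts and persists the data.
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (message.type === 'pageData') {
    localStorage.setItem('lastUrl', message.url); // survives browser restarts
    sendResponse({ status: 'saved' });
  }
});
```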
Overall, I'd suggest spending some time browsing the Developer's Guide and becoming familiar with the concepts and examples.
Chrome has a feature to automatically convert Greasemonkey scripts to extensions!