Can I grab only sound louder than a certain level in HTML5? - html5-audio

I need to capture only sound that is louder than a certain level.
Is it possible?
I know about getUserMedia, but I haven't found any information addressing this scenario.

Yes: you can use Web Audio to detect the volume level (I wrote a code example of this: https://github.com/cwilso/volume-meter) and use that to switch on recording of a sound, e.g. using the relatively new MediaRecorder API or a library like RecorderJS (https://github.com/mattdiamond/Recorderjs). You'll need to make a bunch of code decisions yourself (like: does it automatically switch off? What level turns it on? Instantaneous volume level, or average volume over time?).
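For concreteness, here is a minimal sketch of that approach (the THRESHOLD value, the 100 ms poll interval, and the RMS metering are illustrative choices, not anything the APIs prescribe): meter the microphone with an AnalyserNode and start/stop a MediaRecorder when the level crosses the threshold.

const THRESHOLD = 0.05; // linear RMS amplitude; tune for your microphone

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(stream).connect(analyser);

  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => {
    console.log('captured', e.data.size, 'bytes'); // e.data is an audio Blob
  };

  const samples = new Float32Array(analyser.fftSize);
  setInterval(() => {
    analyser.getFloatTimeDomainData(samples);
    let sum = 0;
    for (const s of samples) sum += s * s;
    const rms = Math.sqrt(sum / samples.length);

    if (rms > THRESHOLD && recorder.state === 'inactive') recorder.start();
    else if (rms <= THRESHOLD && recorder.state === 'recording') recorder.stop();
  }, 100);
});

A real implementation would probably average the level over a short window before switching off, so brief pauses don't split the recording.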

Related

dynamically changing the playback speed of a sound in as3 without altering pitch

I'm making a game based on simple song creation, and I'm planning a feature where players can listen to the songs they've created in the game. The rhythm of the melody is controlled with a system of timers, but this won't work for the backing-track presets I'm planning to implement, as each MP3 file in the backing tracks represents one bar instead of one note.
While it would be possible to use my timer system for playing the backing tracks, this would require several more audio files and much more coding, and would push the project far behind schedule. Therefore, I need to manipulate the playback speed of the files I already have. I've commonly seen two examples of how to do this, here: http://2008.kelvinluck.com/2008/11/first-steps-with-flash-10-audio-programming/ and here: http://blog.andre-michelle.com/2009/pitch-mp3/
The problem with both of these is that they also alter the pitch. That's a problem for me, as I would very much like players to be able to alter the pitch and tempo of their songs separately. I think the code I need is similar to the examples above, but I'm having trouble understanding them, since I haven't had much experience with byte arrays and such. I'd like to be able to understand the examples I included so I can figure out what I need to do to get my game working the way it should, but help of any sort is appreciated. Thank you =)
You can try https://github.com/also/soundtouch-as3.
There is an early (alpha) demo at http://static.ryanberdeen.com/projects/soundtouch-as3/demo/player/stretch.swf.
Quality is "acceptable" if you stay within a 1.0x-1.5x factor (less than 1.0 gives very artificial distortion).
You could also try:
http://iq12.com/old_blog/2009/08/25/real-time-pitch-shifting/
There is an online demo, but it didn't load a track for me (deleted MP3?), so I put a recompiled SWF on my server just for testing. It loads this MP3 audio clip, if you want to compare the results to the original sound.
It aims to preserve audio length (time scale) while adjusting pitch (deeper when slower, higher when faster). You could possibly combine this with Kelvin Luck's Second Steps..., thereby having example code for both speed and pitch.
It was inspired by some C# code ported from the C code found (with a concept explanation) at http://blogs.zynaptiq.com/bernsee/pitch-shifting-using-the-ft/
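For reference, the variable-rate playback that the Kelvin Luck and Andre Michelle posts build on boils down to roughly the sketch below. This is a rough sketch, not their actual code: loadedSound is assumed to be a fully loaded Sound, and note that this approach changes pitch together with speed, which is exactly the coupling a time-stretch library like soundtouch-as3 removes.

import flash.events.SampleDataEvent;
import flash.media.Sound;
import flash.utils.ByteArray;

// loadedSound: a fully loaded Sound instance (assumed to exist)
var rate:Number = 1.5;     // 1.0 = normal speed (and normal pitch)
var position:Number = 0;   // playhead in source samples
var output:Sound = new Sound();
output.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
output.play();

function onSampleData(e:SampleDataEvent):void {
    var src:ByteArray = new ByteArray();
    var need:int = int(4096 * rate) + 2;  // enough source samples for one block
    var read:Number = loadedSound.extract(src, need, int(position));
    var offset:Number = position - int(position);

    for (var i:int = 0; i < 4096; i++) {
        var idx:Number = offset + i * rate;
        var whole:int = int(idx);
        if (whole + 1 >= read) break;     // ran off the end of the source
        var t:Number = idx - whole;
        src.position = whole * 8;         // 8 bytes per stereo frame (2 floats)
        var l0:Number = src.readFloat(), r0:Number = src.readFloat();
        var l1:Number = src.readFloat(), r1:Number = src.readFloat();
        e.data.writeFloat(l0 + (l1 - l0) * t); // left, linearly interpolated
        e.data.writeFloat(r0 + (r1 - r0) * t); // right, linearly interpolated
    }
    position += 4096 * rate;
}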

How to play multiple AudioBufferSourceNodes in sync?

I have multiple audio files that must be played in sync. I have read that the Web Audio API is the best solution for this, but I can't find any document that shows how to achieve it.
Almost all the articles I have read start playback like this:
//Let's say I have two AudioBufferSourceNodes, each connected and with a buffer assigned
var source1, source2;
source1.start(0);
source2.start(0);
Shouldn't this cause source2 to start playing slightly later than source1?
Also, what keeps the sources in sync? I cannot find any mention in the documentation that guarantees the sources play in sync.
Thanks.
There is a single clock for the audio context, and buffer playback runs on that clock, so yes, they will stay in sync.
Even calling start(0); start(0); as above will be perfectly synchronized, because start() just sets up a scheduling request on the audio thread, and the actual scheduling of both of those requests happens together. "Now" is actually slightly in the future (by the audio system latency).
You can schedule them slightly in the future.
var source1, source2;
var when = context.currentTime + 0.01;
source1.start(when);
source2.start(when);
That'll schedule both sounds to play exactly 10 ms from the moment you compute when. It's quick enough to be perceived as immediate, but it gives a bit of breathing room for the overhead of actually calling start() on the first source node.
There are better ways to do this if you have a ton of nodes, but for simple situations, this should be fine.
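Putting it together, a fuller sketch might look like this (buffer1 and buffer2 are assumed to be already-decoded AudioBuffers): both sources run on the context's single clock, so scheduling them for the same future time keeps them sample-accurate relative to each other.

const ctx = new AudioContext();

function playTogether(buffer1, buffer2) {
  const source1 = ctx.createBufferSource();
  const source2 = ctx.createBufferSource();
  source1.buffer = buffer1;
  source2.buffer = buffer2;
  source1.connect(ctx.destination);
  source2.connect(ctx.destination);

  // 10 ms of scheduling headroom; both start on the same clock tick
  const when = ctx.currentTime + 0.01;
  source1.start(when);
  source2.start(when);
}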

Memory-effective sound management in Flash

As far as I've read in the manuals, when you need to play a sound, you use a Sound object and make a temporary SoundChannel object that controls the actual playback. I want to know if there is a memory-efficient way of managing those SoundChannel objects. So far it seems that these are "fire-and-forget" objects, and the only way to make them semi-persistent is to call Sound.play() with a very large loop count. But that approach will not work for one-shot sounds like an arrow shot or a button click. And if I call SoundChannel.stop(), I might as well discard the object, as there is no way to make it resume playing. Is there any way to avoid spawning SoundChannel objects like crazy while still handling both one-shot sounds and infinite-duration sounds such as background music?
SoundChannel is indeed meant to be throw-away, and this kind of heap usage comes with the territory of a garbage-collected language like ActionScript.
You shouldn't worry about GC pressure from sounds; premature optimization is evil! The best you can do is reuse your Sound object instead of creating a new one for each play. There shouldn't be much of a GC issue if you are playing a reasonable number of sounds per frame, say, in a game. SoundChannels are lightweight and reference a single copy of the audio data, so they aren't a big deal. There will probably be much heavier allocations to worry about, such as game objects or bitmaps.
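In code, the reuse pattern looks something like this (a sketch; ClickSound and MusicSound stand in for your embedded sound assets):

import flash.media.Sound;
import flash.media.SoundChannel;

// One Sound instance per asset, created once and reused for every play
var clickSound:Sound = new ClickSound();
var musicSound:Sound = new MusicSound();

// One-shot sound: fire and forget the SoundChannel
function playClick():void {
    clickSound.play();
}

// Background music: keep the channel around so you can stop it later
var musicChannel:SoundChannel = musicSound.play(0, int.MAX_VALUE);

function stopMusic():void {
    if (musicChannel) {
        musicChannel.stop();
        musicChannel = null; // let the GC reclaim it
    }
}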
You could avoid SoundChannel entirely by mixing the audio yourself with SampleDataEvent, but that will certainly have the opposite effect: it is much more processor-intensive, not to mention harder to code.
If you are really worried about the GC, you could use the System.pauseForGCIfCollectionImminent() method to hint the GC to run at a non-intrusive time, such as during a transition in a game.

Syncing two AS3 NetStreams

I'm writing an app that requires an audio stream to be recorded while a backing track is played. I have this working, but there is an inconsistent gap between playback and recording starting.
I don't know if I can do anything to make the sync perfect every time, so I've been trying to track what time each stream starts, so I can calculate the delay and trim it server-side. This has also proved to be a challenge, as no events seem to be sent when a connection starts (as far as I know). I've tried using various properties, like the streams' buffer sizes, etc.
I'm thinking now that, since my recorded audio is only mono, I may be able to put some kind of 'control signal' on the second stereo track, which I could use to determine exactly when a sound starts recording (or stick the whole backing track in that channel so I can sync them that way). That leaves me with the new problem of properly injecting this sound into the NetStream.
If anyone has any idea whether any of these approaches will work, how to execute them, or knows some alternatives, that would be extremely helpful! I've been working on this issue for a while.
The only thing that comes to mind is to use metadata; Flash media streams support metadata and the onMetaData callback. I assume you're using Flash Media Server for the audio coming in and to record the audio going out. If you use the send() method while you're streaming the audio back to the server, you can embed the listening audio track's playhead timestamp in it, so when you get the two streams back to the server you can mux them together properly. You can also try encoding the audio streamed to the client with metadata and use onMetaData to sync them up. I'm not sure how to do this, but a second approach is to combine the two streams as the audio goes back, so that you don't need to mux them later, or to attach it to a blank video stream with two audio tracks...
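A sketch of that send() idea (the handler name "syncMark" and the 500 ms interval are made up for illustration): periodically embed the backing track's playhead time into the outgoing stream, so the server can line the two recordings up afterwards.

import flash.net.NetStream;
import flash.utils.setInterval;

// playbackStream plays the backing track; publishStream records the mic
function startSyncMarks(playbackStream:NetStream, publishStream:NetStream):void {
    setInterval(function():void {
        // The server (or a subscribing client object) receives this as a
        // call to a syncMark() handler with the playhead time as argument
        publishStream.send("syncMark", playbackStream.time);
    }, 500);
}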
If you're going to inject something as complex as sound into the NetStream, it might be better to go with a Socket instead, where you read bytes directly. There may be compression on the NetStream, so the data sent is not raw sound data; you'd need a class for decompressing the codec. Once you have raw sound data, mix the input into it using Socket.readUnsignedByte() or Socket.readFloat(), and write back the modified data using Socket.writeByte() or Socket.writeFloat().
That covers the alternative of injecting the backing track into the audio.
Syncing is actually quite simple. Even though the data might not be sent instantly, one thing stays constant: time. So when the user's audio is finished, just mix it straight onto the backing track; the timing should stay the same.
If the user's download connection is slow, so that the backing track has unwanted breaks, check in the SWF whether enough data is buffered to add the next sound buffer (usually 4096 bytes, if I remember correctly). If yes, continue streaming the user's audio.
If not, pause streaming and resume as soon as the data catches up.
In my experience NetStream is one of the most inaccurate and dirty features of Flash (NetStream.play2?!), which is quite ironic, seeing how Flash's primary use is probably video playback.
Trying to sync it with anything else in a reliable way is very hard... events and statuses are not very straightforward, and there are multiple issues that can spoil your syncing.
Luckily, however, netStream.time will tell you the current playhead position quite accurately, so you can use that to determine the starting time, delays, dropped frames, etc. Note that determining the actual starting time is a bit tricky, though: when you start loading a NetStream, the time value is zero, but when it shows the first frame and is waiting for the buffer to fill (not playing yet), the time value is something like 0.027 (it depends on the video), so you need to monitor this value very carefully to determine events accurately.
An alternative to NetStream is embedding the video in a SWF file, which should make synchronization much easier (especially if you use frequent keyframes when encoding). But you will lose quality/filesize ratio (if I remember correctly, you can only use FLV, not H.264).
no events seem to be sent when a connection starts
Sure there are: NetStatusEvent.NET_STATUS fires for a multitude of reasons on both NetConnection and NetStream; you just have to add a listener and process the contents of the event's info object.
See the NetStatusEvent entry in the AS3 reference docs; you're looking for the info.code values.
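A minimal sketch of that listener (connection and stream are assumed to be an already-created NetConnection and NetStream):

import flash.events.NetStatusEvent;

connection.addEventListener(NetStatusEvent.NET_STATUS, onNetStatus);
stream.addEventListener(NetStatusEvent.NET_STATUS, onNetStatus);

function onNetStatus(e:NetStatusEvent):void {
    switch (e.info.code) {
        case "NetConnection.Connect.Success":
            // safe to create and publish/play streams now
            break;
        case "NetStream.Play.Start":
            // playback has actually begun; record a timestamp here
            break;
        case "NetStream.Buffer.Full":
            // audio becomes audible at roughly this moment
            break;
    }
}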

AS3: Audio activity level of a NetStream

I'm pulling my hair out (once again) trying to find a way to read the audio activity level of a NetStream, similar to how you can do it with a Microphone. I'd hate to have to let each client send its activity level through SharedObjects or the like, which right now seems to be the only way to get it to work.
Thanks so much in advance!
-Dave
In AS3 the NetStream object has a property called info, which holds a NetStreamInfo object. NetStreamInfo gives you all sorts of metrics; among them is the property audioBytesPerSecond, which gives an indication of the audio activity at a certain point in time. Requesting the NetStreamInfo for the incoming stream provides the data from the client; requesting it for the outgoing stream provides data from your own cam and mic activity. More detail on the NetStreamInfo object can be found here: http://help.adobe.com/en_US/AS3LCR/Flash_10.0/flash/net/NetStreamInfo.html
NetStreamInfo.audioBytesPerSecond is unreliable.
Being a per-second average, it takes up to one extra second before you can detect the absence of sound.
You should instead use NetStreamInfo.audioByteCount. From the Adobe documentation:
Specifies the total number of audio bytes that have arrived in the queue, regardless of how many have been played or flushed. You can use this value to calculate the incoming audio data rate, using the metric of your choice, by creating a timer and calculating the difference in values in successive timer calls.
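A sketch of that polling approach (the 100 ms interval is an arbitrary choice; stream is assumed to be your NetStream): diff audioByteCount between timer ticks to get a short-window data rate, which reacts to silence much faster than the one-second audioBytesPerSecond average.

import flash.events.TimerEvent;
import flash.utils.Timer;

var lastCount:Number = 0;
var timer:Timer = new Timer(100); // repeats until stopped

timer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
    var count:Number = stream.info.audioByteCount;
    var bytesThisTick:Number = count - lastCount;
    lastCount = count;
    // bytesThisTick staying near zero for several ticks => effectively silence
});
timer.start();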
That's probably the only way to do it; NetStream is a pretty generic object by design. The best place to ask a question like this might be FlashComGuru.com, where a lot of NetStream/FMS people hang out.
I've seen Flex examples that display an EQ bar.
The code uses two objects: SoundTransform and flash.media.SoundChannel.
It works by dispatching a custom event with a property that is the SoundChannel object containing the EQ of the playing audio stream.
I'm not sure exactly how the code works, because it's bundled up in Flex, or how to get from the NetStream to the audio-based SoundChannel.
The example is in chapter 15, "Building your own components", of the book Flex 3 Component Solutions by Jack Herrington, published by Friends of Ed (the pink-spine books).
Hope that helps.