How can I merge two sounds and save them as a new file? One sound is a loaded MP3 file and the other comes from the microphone. Then I need to upload this sound to a server. Is this possible?
This can all be done, but if you're looking for a simple example with a few method calls, I'm afraid it's not that easy.
You can extract bytes from a sound with Sound.extract(). This data is the sound amplitude as 16-bit numbers, with right and left channels interleaved. Use ByteArray.readShort() to read them.
Microphone data can be captured with the SampleDataEvent.SAMPLE_DATA event, see example here. To mix it with the song, just add the sound amplitudes and write the result into a third array. The result will essentially be unpacked WAV-format sound data (without a header). You can upload it raw, or search for "as3 mp3 encoder" (google); such encoders are rare and written by enthusiasts, but you may be able to get them working. Also, to mix sounds correctly, the sample rates of the mic data and the sound file must be equal.
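A minimal sketch of the additive mix described above, written in JavaScript because the logic is language-agnostic (the function name mixPcm is illustrative; an AS3 version would read with ByteArray.readShort() and write with writeShort()). Both inputs are assumed to be 16-bit PCM at the same sample rate with interleaved L/R channels:

```javascript
// Mix two 16-bit PCM buffers by adding amplitudes sample-by-sample,
// clamping to the signed 16-bit range to avoid wrap-around distortion.
function mixPcm(songSamples, micSamples) {
  const length = Math.min(songSamples.length, micSamples.length);
  const mixed = new Int16Array(length);
  for (let i = 0; i < length; i++) {
    const sum = songSamples[i] + micSamples[i];
    mixed[i] = Math.max(-32768, Math.min(32767, sum));
  }
  return mixed;
}
```

The clamp matters: two loud sources can sum past 32767, and without it the value would wrap around to a large negative number and produce loud clicks.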
As for the upload part: if this were a file on disk, it would be easy with FileReference.upload(). But here there's only data in memory, so you can look into the Socket class to send it.
Related
I was wondering if there's a way to get streaming audio data (MP3, FLAC, arbitrary chunks of data in any encoding) from a server, process it, and start playing after the first chunk?
Background: All of our data is stored in compressed chunks, with a LAMP environment serving the data. Decompression and reassembly are done client-side with XHR downloads, IndexedDB, FileHandle and/or the Chrome filesystem API. All currently available audio/video functionality requires the audio to be downloaded completely (otherwise decodeAudioData fails) or requires a URL to the source, without giving me a chance to process incoming data client-side.
I am looking for a solution to squeeze my processing into the browser's built-in streaming/caching/decoding functionality (e.g. the audio/video tag). I don't want to pre-process anything server-side, I don't want Flash/Java applets, and I'd like to avoid aligning data client-side (e.g. processing MP3).
Question: Would it be possible to dynamically "grow" the storage that a blob URL points to? In other words: create a FileHandle/FileEntry, generate a blob URL, feed it into an audio tag, and then grow the file with more data?
Any other ideas?
Michaela
Added: Well, after another day of fruitless attempts, I must confirm that there are two problems in dealing with streamed/chunked MP3/Ogg data:
1) decodeAudioData is just too picky about what's fed into it. Even if I pre-align the Ogg audio (splitting at "OggS" boundaries) I am unable to get the second chunk decoded.
2) Even IF I were able to get the chunks decoded, how would I proceed to play them without setting timers, start positions or other head-banging detours? Maybe the Web Audio API developers should take a look at aurora/mp3?
Added: Sorry to be bitching. But my newest experiments with recording audio from the microphone are not very promising either: 400 KB of WAV for a few seconds of recording? I have taken a few minutes to write about my experiences with the Web Audio API and added a few suggestions, from a coder's perspective: http://blog.michaelamerz.com/wordpress/a-coders-perspective-on-the-webaudioapi/
Check out https://github.com/brion/ogv.js. Brion's project chunk-loads an .ogv video and outputs the raw data back to the screen through the Web Audio API and Canvas, playing at the original FPS/timing of the file itself.
There is a StreamFile object in the codebase that handles the chunked load, buffering and readout of the data, as well as an example of how it is being assembled for playback through WebAudio.
I actually emailed Brion directly for a little help, and he got back to me within an hour. It wasn't built for exactly your use case, but the elements are there, and I highly recommend Brion, who is very knowledgeable about file formats, encoding and playback.
You cannot use the <audio> tag for this. However, here is what you can use:
Web Audio API - allows you to dynamically construct an audio stream in JavaScript
WebRTC - might need the streamed data to be pre-processed on the server side; not sure
Buffers are recyclable, so you can discard already played audio.
How you load your data (XHR, WebSockets, chunked file downloads) really doesn't matter, as long as you can get the raw data into a JavaScript buffer.
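A sketch of the Web Audio route, which also answers the "how do I play chunks without timers" worry above: AudioBufferSourceNode.start() takes an absolute time on the context clock, so consecutive decoded chunks can be scheduled back-to-back with no setTimeout. The pure helper nextStartTime is illustrative (the name is mine, not a real API); the browser-only usage is shown in comments:

```javascript
// Compute when the next chunk should start so playback is gapless:
// start immediately if we have fallen behind the context clock,
// otherwise append seamlessly after the last scheduled chunk.
function nextStartTime(currentTime, scheduledUntil, chunkDuration) {
  const when = Math.max(currentTime, scheduledUntil);
  return { when, scheduledUntil: when + chunkDuration };
}

// Browser-only usage with a real AudioContext (assumed setup):
// let scheduledUntil = 0;
// function playChunk(ctx, audioBuffer) {
//   const src = ctx.createBufferSource();
//   src.buffer = audioBuffer;
//   src.connect(ctx.destination);
//   const s = nextStartTime(ctx.currentTime, scheduledUntil, audioBuffer.duration);
//   src.start(s.when);          // scheduled on the audio clock, sample-accurate
//   scheduledUntil = s.scheduledUntil;
// }
```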
Please note that there is no universal audio format that all browsers can decode, and your mileage with MP3 may vary. AAC (MPEG-4 audio) is more widely supported and has the best web and mobile coverage. You can also decode AAC in pure JavaScript in Firefox: http://jster.net/library/aac-js - and you can decode MP3 in pure JavaScript as well: http://audiocogs.org/codecs/mp3/
Note that localStorage supports only 5 MB of data per origin without additional dialog boxes, so this severely limits storing audio client-side.
Okay, basically we have jRecorder implemented on our website, which gives us the ability to capture audio in WAV format.
Now, after the capture, we use the ShineMP3Encoder to convert that WAV to MP3 (to save on file size). This all works fine.
Numerous people have encountered an issue where, if the recorded audio levels are too high, MP3 encoding will completely stop and the file will become corrupt/short. When doing the same with a WAV, the WAV doesn't seem to care how loud the recorded audio is and will happily play it back as is.
I appreciate my question is incredibly niche, but after banging my head against the wall for days, this is my only other option.
For what it's worth, this is the ActionScript that was used to record (it's a bog-standard ShineMP3 implementation):
//recorder.output is outputted from jRecorder
mp3Encoder = new ShineMP3Encoder(recorder.output);
mp3Encoder.addEventListener(Event.COMPLETE, mp3EncodeComplete);
mp3Encoder.start();
One possibility is that the encoding is running slower than the loop on those tracks, causing an error.
Try making the encoder run slower and see if that fixes the error.
In the start() method of ShineMP3Encoder.as replace
timer = new Timer(1000/30);
with
timer = new Timer(150);
That's line 37 in the current code base.
I'm trying to make something similar to this:
http://www.personalwine.com/catalog/label_designer_app.php?templateId=5046&action=4C92&userId=0
in Flash IDE with AS3.
My problem is how to save all the objects on the stage as a "template" and reuse them again: not as images, but as objects that can be edited again.
Could anyone point me in the right direction on how to solve this problem?
Thanks in advance!
Maybe an XML save/load function could help. Once something has been created, on save all the attributes of each object are written to an XML file. If you want to recreate it, you parse that info and rebuild the screen.
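The save/load idea above can be sketched as a round trip: serialize each object's attributes to XML on save, parse them back into plain objects on load, and rebuild the stage from those. This is a JavaScript illustration of the data flow only; the attribute names (type, x, y, rotation) and the ad-hoc regex parser are assumptions for the sketch, and an AS3 version would use its native XML (E4X) support instead:

```javascript
// Serialize an array of display-object descriptions to a template XML string.
function toTemplateXml(objects) {
  const items = objects.map(o =>
    `  <object type="${o.type}" x="${o.x}" y="${o.y}" rotation="${o.rotation}"/>`
  );
  return `<template>\n${items.join("\n")}\n</template>`;
}

// Parse the template XML back into plain objects the app can rebuild from.
function fromTemplateXml(xml) {
  const re = /<object type="([^"]+)" x="([^"]+)" y="([^"]+)" rotation="([^"]+)"\/>/g;
  const objects = [];
  let m;
  while ((m = re.exec(xml)) !== null) {
    objects.push({ type: m[1], x: Number(m[2]), y: Number(m[3]), rotation: Number(m[4]) });
  }
  return objects;
}
```

The key design point is that you store the editable description (positions, rotations, text, etc.), not rendered pixels, which is exactly what makes the template re-editable later.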
Where?
You have two choices for saving data, you can either save it on the user's computer (client-side) or on your own server (server-side).
On the server
If you're going to use anything server-side then, obviously, you're going to need a server (and a database). Using PHP with MySQL is both free and very fast for this sort of (small) usage. You might also want to look into Node.js, since it will probably come intuitively to an ActionScript user: Node is JavaScript, and the syntax and structure of Node.js files and ActionScript files are very similar.
On the user's PC
If you just want to store the data on the user's computer, you can use a SharedObject, it will save all the data you need (variables and such) on the user's computer.
Here is a short nice tutorial on how to do so:
http://kirill-poletaev.blogspot.com/2010/07/how-to-save-local-data-with.html
Here is a much bigger and more detailed tutorial:
http://active.tutsplus.com/tutorials/actionscript/movieclip-reconstruction-with-the-sharedobject-class/
Basically, you can do this for all the variables you want to save (movie clip locations, etc.) and then load them back. It is very straightforward; you can even store a whole MovieClip object.
I have some arbitrary signal data which I'd like to get a frequency analysis of. It's not audio data. Is there a way to coerce AS3's computeSpectrum() call into doing this work for me?
Thanks in advance!
orion
The computeSpectrum() function takes a sample of the currently playing audio on which to perform the FFT. So you would have to convert your non-audio data into a sound file of some type and play it. Since there appears to be no way to synchronize playback and capture, you would have to loop your data many times in the sound file so that it plays for long enough, and hope to get lucky capturing it with a computeSpectrum() call. It's very doubtful that this would work and give you meaningful results.
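Since computeSpectrum() only samples currently playing audio, a more reliable route for arbitrary non-audio data is to compute the spectrum yourself instead of coercing the player. A minimal direct-DFT sketch in JavaScript (O(n²), fine for small inputs; use an FFT library for anything large; the function name dftMagnitudes is illustrative):

```javascript
// Compute the magnitude spectrum of a real-valued signal by direct DFT.
// Returns one magnitude per frequency bin up to the Nyquist bin (n/2).
function dftMagnitudes(samples) {
  const n = samples.length;
  const mags = [];
  for (let k = 0; k < n / 2; k++) {
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * k * t) / n;
      re += samples[t] * Math.cos(angle);
      im += samples[t] * Math.sin(angle);
    }
    mags.push(Math.sqrt(re * re + im * im));
  }
  return mags;
}
```

A pure sine that completes exactly k0 cycles over the window shows up as a single spike at bin k0, which is the sanity check to run before trusting the output on real data.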
I would like to be able to
create a folder for the copied video frames
access the individual frames from a .flv video file and/or .swf file
save these frames to the auto-created folder
I assume one would need to do this using ActionScript 3 to scan through the .swf and .flv files and extract the frames.
Are there guides on how to do this?
You need to know WHAT frames you want to extract. For example:
extract 20 frames in regular interval from the video clip
extract frames at 15 seconds interval
extract frames at keyframes (scene changes)
I guess you don't have to use AS3 to extract frames; you can also create the script in some other language. The central piece of frame extraction could be ffmpeg, as described in this article.
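For the ffmpeg route, the "every 15 seconds" case above maps to ffmpeg's fps video filter (fps=1/15 emits one frame per 15 seconds). A sketch of driving it from Node.js; the file names and interval are example values, and the argument-building helper is split out so it can be checked on its own:

```javascript
// Build the ffmpeg argument list for interval-based frame extraction.
// outPattern like "frame%04d.png" numbers the emitted images.
function ffmpegArgs(input, outPattern, everySeconds) {
  return ["-i", input, "-vf", `fps=1/${everySeconds}`, outPattern];
}

// Actual invocation (requires ffmpeg on PATH):
// const { execFileSync } = require("child_process");
// execFileSync("ffmpeg", ffmpegArgs("clip.flv", "frames/frame%04d.png", 15));
```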
If it is an AS3 solution, I would do the following:
- make a loader which loads your FLA/FLV
- add an enterFrame event listener to it, and on each frame draw the loader object to a buffer; if you have ever done any loading and drawing, this will probably take you 10-20 minutes to set up.
This is pretty much the only straightforward solution if you're dealing with code-based animations; videos can be handled in different and easier ways, I guess.
You will face the challenge of saving the output, though. Flash Player can save images on your computer, but only by prompting you to save the file. You will need functions available only in AIR if you want to save anything without prompts.