I have some arbitrary signal data which I'd like to get a frequency analysis of. It's not audio data. Is there a way to coerce AS3's computeSpectrum() call into doing this work for me?
Thanks in advance!
orion
The computeSpectrum() function appears to take a sample of the currently playing audio and perform the FFT on that. So you would have to convert your non-audio data into a sound file of some type and play it. Since there appears to be no way to synchronize playback and capture, you would have to loop your data many times in the sound file so that it plays long enough, and hope to get lucky and capture it with a computeSpectrum() call. It is very doubtful that this would work or give you meaningful results.
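If all you need is the frequency analysis itself, it may be simpler to compute the transform directly on your samples instead of routing them through the sound pipeline. Here is a minimal sketch in JavaScript rather than AS3, purely to illustrate the math; a real implementation would use an FFT library, since the naive transform is O(N²):

```js
// Naive discrete Fourier transform: returns the magnitude spectrum of
// `signal`, an array of N real-valued samples. O(N^2) -- fine for small N,
// but use a proper FFT implementation for anything large.
function dftMagnitudes(signal) {
  const N = signal.length;
  const magnitudes = new Array(N);
  for (let k = 0; k < N; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const angle = (-2 * Math.PI * k * n) / N;
      re += signal[n] * Math.cos(angle);
      im += signal[n] * Math.sin(angle);
    }
    magnitudes[k] = Math.sqrt(re * re + im * im); // bin k's magnitude
  }
  return magnitudes;
}
```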
Related
I have 4 WAV files all designed to be played at the same time. I have created the 4 sound buffers and play them with the start function. All 4 sounds play successfully, but they are slightly out of sync with each other. I guess each call to start takes long enough for the subsequent call to be out of sync. Is there any way I can force them to sync?
Thanks
Without seeing your code, these are the spontaneous ideas I have. Make sure to check the following:
Give the exact same start time to each start() call: save context.currentTime plus a small offset in a variable and pass that to the start method of each source node. Don't pass context.currentTime to each source in a for loop or similar, because the context time may advance while the loop executes, so the time passed to each source would differ (see the sketch after this list).
Make sure that the audio files are trimmed identically at the beginning. If one file has 10 ms of extra leading silence, that file will always be 10 ms off (unless you compensate for it in code).
If none of that works or applies, please add some code to your question showing what you're doing right now.
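Here is a minimal sketch of the first point, assuming the four AudioBuffers are already decoded into an array named buffers (the variable names are illustrative):

```js
// Schedule every source against ONE shared start time, read once.
const context = new AudioContext();

function playInSync(buffers) {
  // Small headroom so all start() calls are issued before the deadline.
  const startTime = context.currentTime + 0.1;
  for (const buffer of buffers) {
    const source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(startTime); // same clock value for all four nodes
  }
}
```

Because every node is scheduled against the same value on the context's clock, the sources start together regardless of how long the loop itself takes.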
I am writing code to collect data from the Steam API (documentation: https://developer.valvesoftware.com/wiki/Steam_Web_API).
In particular, I use the fromJSON() function from the jsonlite package in R.
However, I have the feeling that the code is slow and that the bottleneck is the actual calling of the API. I am currently able to do around 7,500-10,000 calls an hour, i.e. around 2-3 calls per second. This feels slow. Is it possible to speed this up, and if so, how?
Two things I have found already: it may be necessary to close the connection after opening it (cf. http://www.firaja.cc/steam-web-api-right-way.html), and besides JSON (what I use now) the API also offers XML and CSV output, so maybe it would be better to use one of the latter two? Any other possible solutions?
I am trying to create functions to save and load objects. I am storing the objects in File.applicationStorageDirectory and using FileStreams. At some point I thought my code worked, and then it stopped soon after. It also saved the last string I entered, but none of the others. I have pastebinned the functions I think are relevant; even a pointer in the right direction would help. I am aiming to publish this on an Apple iPad.
http://pastebin.com/61WLLUAB
I am in the process of learning to program and appreciate any constructive criticism of my layout.
Thanks
I think flash.utils.registerClassAlias() and SharedObjects will be the key to loading and saving objects. Be sure to implement IExternalizable on all objects that will be processed when you save and load your game.
As for why your function does not work: first, you don't actually save your state between sessions; instead you initialize all the objects anew, write them to files and then read them back, erasing the previous data. You need to try reading each object out of the file system before instantiating it anew, for each of the charID.IDs you have there. Your IDs are not instantiated when the Main constructor starts, so Main doesn't know whether any of the objects were saved beforehand.
I was wondering if there's a way to get streaming audio data (MP3, FLAC, arbitrary chunks of data in any encoding) from a server, process it, and start playing after the first chunk?
Background: All of our data is stored in compressed chunks, with a LAMP environment serving the data. Decompression and reassembly are done client-side with XHR downloads, IndexedDB, FileHandle and/or the Chrome filesystem API. All currently available audio/video functionality requires the audio to be downloaded completely (otherwise decodeAudioData fails) or requires a URL to the source, without giving me a chance to process incoming data client-side.
I am looking for a solution that squeezes my processing into the browser's built-in streaming/caching/decoding functionality (e.g. the audio/video tag). I don't want to pre-process anything server-side, I don't want Flash or Java applets, and I'd like to avoid aligning data client-side (e.g. processing MP3 frames).
Question: Would it be possible to dynamically "grow" the storage that a blob URL points to? In other words: create a FileHandle/FileEntry, generate a blob URL, feed it into an audio tag, and grow the file with more data?
Any other ideas?
Michaela
Added: Well, after another day of fruitless attempts, I must confirm that there are two problems in dealing with streamed/chunked MP3 or Ogg data:
1) decodeAudioData is just too picky about what's fed into it. Even if I pre-align the Ogg audio (splitting at "OggS" boundaries), I am unable to get the second chunk decoded.
2) Even if I were able to get the chunks decoded, how would I proceed to play them without setting timers, start positions or other head-banging detours? Maybe the Web Audio API developers should take a look at aurora/mp3?
Added: Sorry to be bitching, but my newest experiments with recording audio from the microphone are not very promising either: 400 KB of WAV for a few seconds of recording? I have taken a few minutes to write about my experiences with the Web Audio API and added a few suggestions, from a coder's perspective: http://blog.michaelamerz.com/wordpress/a-coders-perspective-on-the-webaudioapi/
Check out https://github.com/brion/ogv.js. Brion's project chunk-loads an .ogv video and outputs the raw data back to the screen through the Web Audio API and Canvas, playing at the original FPS/timing of the file itself.
There is a StreamFile object in the codebase that handles the chunked load, buffering and readout of the data, as well as an example of how it is assembled for playback through Web Audio.
I actually emailed Brion directly for a little help and he got back to me within an hour. It wasn't built for exactly your use case, but the elements are there, and I highly recommend Brion, who is very knowledgeable about file formats, encoding and playback.
You cannot use the <audio> tag for this. However, here is what you can use:
Web Audio API - allows you to dynamically construct an audio stream in JavaScript (see the sketch after these notes).
WebRTC - might need pre-processing of the streamed data on the server side; not sure.
Buffers are recyclable, so you can discard audio that has already been played.
How you load your data (XHR, WebSockets, chunked file downloads) really doesn't matter, as long as you can get the raw data into a JavaScript buffer.
Please note that there is no universal audio format all browsers can decode, and your mileage with MP3 may vary. AAC (MPEG-4 audio) is better supported and has the best web and mobile coverage. You can also decode AAC in pure JavaScript in Firefox (http://jster.net/library/aac-js), and you can decode MP3 in pure JavaScript as well (http://audiocogs.org/codecs/mp3/).
Note that localStorage supports only 5 MB of data per origin without additional dialog boxes, so this severely limits storing audio on the client side.
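A minimal sketch of the Web Audio approach, assuming a hypothetical fetchNextChunk() that returns the next ArrayBuffer of compressed audio (via XHR, WebSockets, etc.) and that each chunk can be decoded on its own, which is exactly the decodeAudioData pickiness noted above:

```js
// Play decoded chunks back-to-back on the AudioContext clock.
const context = new AudioContext();
let playhead = 0; // context time at which the next chunk should start

async function streamChunks() {
  for (;;) {
    const chunk = await fetchNextChunk(); // hypothetical loader
    if (!chunk) break;                    // no more data
    const buffer = await context.decodeAudioData(chunk);
    const source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    // Never schedule in the past; leave a little headroom at the start.
    playhead = Math.max(playhead, context.currentTime + 0.05);
    source.start(playhead);
    playhead += buffer.duration; // queue the next chunk right after
  }
}
```

Scheduling each AudioBufferSourceNode on the context clock avoids the timer-based detours mentioned earlier, and each finished source can simply be dropped, which is what makes the buffers recyclable.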
How can I merge two sounds and save them as a new file? One sound is a loaded MP3 file and the other comes from the microphone. Then I need to upload the result to a server. Is this possible?
This can all be done, but if you are looking for a simple example with a few methods to call, I'm afraid it's not that easy.
You can extract bytes from a sound with Sound.extract(). This data consists of sound amplitudes as 32-bit floating-point values (in the range -1 to 1), with the left and right channels interleaved; use ByteArray.readFloat() to read them.
Microphone data can be captured with SampleDataEvent.SAMPLE_DATA (Adobe's documentation has an example). To mix the microphone data with the song, just add the sample amplitudes and write the result into a third array (see the sketch below). Converted to 16-bit integers, the result is essentially WAV-format (headerless) uncompressed sound data. You can upload it raw, or search for "as3 mp3 encoder"; such encoders are rare and written by enthusiasts, so you may or may not get them to work. Also, to mix the sounds correctly, the sample rates of the microphone data and the sound file must be equal.
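The mixing arithmetic itself is language-agnostic; here is a sketch in JavaScript rather than AS3 (array names are illustrative), summing two float sample streams, clamping, and converting to 16-bit PCM suitable for a WAV payload:

```js
// Mix two interleaved-stereo float streams (-1..1) into 16-bit PCM.
// `song` and `mic` are Float32Arrays at the same sample rate.
function mixToPcm16(song, mic) {
  const length = Math.min(song.length, mic.length);
  const pcm = new Int16Array(length);
  for (let i = 0; i < length; i++) {
    // Add the amplitudes, then clamp to avoid overflow distortion.
    const mixed = Math.max(-1, Math.min(1, song[i] + mic[i]));
    pcm[i] = Math.round(mixed * 32767); // scale to signed 16-bit range
  }
  return pcm;
}
```

In AS3 the same loop would read samples with readFloat() and write the mixed result with writeShort().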
And the upload part: if this were a file on disk, it would be easy with FileReference.upload(). But here there is only data in memory, so you can look into the Socket class to send it.