Is it possible in SL4 (Silverlight 4) and/or Windows Phone to access the sound buffer once it has been decoded and is being played in a MediaElement?
If, for instance, I'm playing an AAC file through a MediaElement, is there a specific point at which the decoded sound is available as a stream and accessible via some C++ code?
Background:
I am trying to build my own RTMP server for live streaming (Python 3).
I have succeeded in establishing an RTMP connection between the server and the client (using OBS). The client sends me the video and audio data (audio codec: AAC, video codec: AVC).
I also built a website (HTTP) with an HLS player (hls.js, .ts and .m3u8) that works properly.
Goal:
What I need to do is take the video and audio data (bytes) from the RTMP connection and mux them into a .ts container so that they are playable on my website.
Plan:
I planned to mux them into an FLV file first and then convert it to TS using FFmpeg. This actually worked for the audio data, because FLV supports AAC, so I was able to generate FLV files containing audio only.
Problem:
The problem started when I tried to add the video data to the FLV file, but this is not possible because FLV does not support AVC.
What I tried to do:
I could choose another container format that supports AVC/AAC, but they all seem really complicated and I don't have much time for this.
I also tried to find a way to transcode the AVC to H.263 and put that inside the FLV (since FLV does support H.263), but I couldn't find one.
I really need help. Thanks in advance!
I'm generating a 44100 Hz dynamic audio stream in Flash using a flash.media.Sound object and the SAMPLE_DATA event. I'd like to be able to analyze the output instead of just listening to it.
What would be the most straightforward way of converting my Flash stream of float samples to an audio file, in a standard format that can be opened by an audio editor? Is there any audio format that would be particularly suitable for this?
If you don't want to listen to it, there's no need to use Sound or the SampleDataEvent.SAMPLE_DATA event at all. Just create the numbers and store them in a ByteArray or other data structure.
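For example, here is a minimal sketch of that approach; the 440 Hz sine generator is just a stand-in for whatever you are actually synthesizing, and the variable names are my own:

import flash.utils.ByteArray;

// Generate samples straight into a ByteArray instead of feeding a Sound object.
var sampleRate:int = 44100;
var seconds:Number = 2;
var samples:ByteArray = new ByteArray();

for (var i:int = 0; i < sampleRate * seconds; i++)
{
    var value:Number = Math.sin(2 * Math.PI * 440 * i / sampleRate); // placeholder tone
    samples.writeFloat(value); // left channel
    samples.writeFloat(value); // right channel (duplicated for stereo)
}
// 'samples' now holds 32-bit float, stereo, 44100 Hz data that you can
// analyze directly or hand to an encoder.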
Is there any audio format that would be particularly suitable for this?
A format that can be opened by your audio editor would be preferable.
Otherwise, this totally depends on what you want to do with the sound data.
What would be the most straightforward way of converting my Flash stream of float samples to an audio file, in a standard format that can be opened by an audio editor?
To use an existing library that encodes the data into the specified format.
tonfall has encoder/decoder classes for various audio formats: WAV, AIFF, and raw PCM (no header).
WaveEncoder from Nicolas Bretin apparently encodes to WAV
Of course, if you know the specification, you can write your own encoder.
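To make the do-it-yourself option concrete, here is a rough sketch of a minimal WAV encoder (16-bit PCM in a little-endian RIFF container), written against the 32-bit-float ByteArray from the sketch above; the function name is mine and the header layout should be double-checked against the WAV specification:

import flash.utils.ByteArray;
import flash.utils.Endian;

// Wrap 32-bit float samples (stereo, 44100 Hz by default) in a RIFF/WAVE
// header as 16-bit PCM.
function encodeWav(floatSamples:ByteArray, channels:int = 2, rate:int = 44100):ByteArray
{
    // convert floats in [-1, 1] to signed 16-bit integers
    var pcm:ByteArray = new ByteArray();
    pcm.endian = Endian.LITTLE_ENDIAN;
    floatSamples.position = 0;
    while (floatSamples.bytesAvailable >= 4)
    {
        var s:Number = Math.max(-1, Math.min(1, floatSamples.readFloat()));
        pcm.writeShort(int(s * 32767));
    }

    var wav:ByteArray = new ByteArray();
    wav.endian = Endian.LITTLE_ENDIAN;
    wav.writeUTFBytes("RIFF");
    wav.writeUnsignedInt(36 + pcm.length);     // RIFF chunk size
    wav.writeUTFBytes("WAVE");
    wav.writeUTFBytes("fmt ");
    wav.writeUnsignedInt(16);                  // fmt chunk size
    wav.writeShort(1);                         // 1 = uncompressed PCM
    wav.writeShort(channels);
    wav.writeUnsignedInt(rate);
    wav.writeUnsignedInt(rate * channels * 2); // byte rate
    wav.writeShort(channels * 2);              // block align
    wav.writeShort(16);                        // bits per sample
    wav.writeUTFBytes("data");
    wav.writeUnsignedInt(pcm.length);
    wav.writeBytes(pcm);
    return wav;
}

The resulting ByteArray can then be offered to the user with FileReference.save(encodeWav(samples), "out.wav") (Flash Player 10+, called from a user-initiated event) and opened in any audio editor.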
Is it possible to merge sound captured from the microphone with a selected mp3 file and save the result as a new mp3 file on Windows Phone 8?
Does NAudio have WP8 support?
Possible, yes. But you would have to decode the mp3 file into raw PCM data that matches the wave format of the captured audio, merge the PCM data (either an additive or mean merge, or simply append the captured audio to the end of the PCM data from the mp3), and then encode the new PCM data back to MP3. There are plenty of libraries out there that support encoding/decoding of mp3 data, and you might even find one that lets you decode/re-encode only the sub-section of the mp3 file that you're interested in merging with the captured audio.
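The merge step itself is just arithmetic on the decoded samples. The question targets WP8, so treat the following as pseudocode for that arithmetic (shown in ActionScript 3 syntax, with normalized samples in the range -1 to 1); the append variant is simply writing one buffer after the other instead of summing:

// Mix two decoded PCM buffers of the same format and sample rate.
function mixPcm(a:Vector.<Number>, b:Vector.<Number>, mean:Boolean = false):Vector.<Number>
{
    var length:int = int(Math.max(a.length, b.length));
    var out:Vector.<Number> = new Vector.<Number>(length, true);
    for (var i:int = 0; i < length; i++)
    {
        var sa:Number = i < a.length ? a[i] : 0;
        var sb:Number = i < b.length ? b[i] : 0;
        var mixed:Number = mean ? (sa + sb) / 2 : sa + sb; // mean vs. additive merge
        out[i] = Math.max(-1, Math.min(1, mixed));         // clamp to the valid range
    }
    return out;
}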
NAudio has, AFAIK, no WP8 support. I believe that they're currently working on Windows Store App support.
You would need to use WASAPI (which is a native-only API) or the Microsoft.XNA.Audio.Microphone class to capture data.
Using Flex, can we record sound from the microphone, store it locally, and upload it later? If there are any relevant links, please point me to them.
Check this link for how to use the microphone to record and FileReference to save the recording.
It is possible to record audio using the Microphone API's SAMPLE_DATA event. The data property of the event is a ByteArray containing the audio samples.
A web-based Flash application could copy these samples into a data structure in memory and prompt the user to save the data to a local file, as in the sketch below. An AIR application would be able to write the data to the file system or a SQL database directly.
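A minimal sketch of that idea (capture into a ByteArray, then let the user save it); the raw samples are 32-bit floats at the microphone's rate, FileReference.save requires Flash Player 10+ and must run inside a user gesture, and the names below are my own:

import flash.events.SampleDataEvent;
import flash.media.Microphone;
import flash.net.FileReference;
import flash.utils.ByteArray;

var mic:Microphone = Microphone.getMicrophone();
mic.rate = 44;          // request 44 kHz capture
mic.setSilenceLevel(0); // keep dispatching samples even during silence

var recorded:ByteArray = new ByteArray();
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, onMicSample);

function onMicSample(e:SampleDataEvent):void
{
    // e.data holds mono 32-bit float samples; copy them into memory
    while (e.data.bytesAvailable)
    {
        recorded.writeFloat(e.data.readFloat());
    }
}

// call this from a button click handler (save() must run in a user event)
function saveRecording():void
{
    new FileReference().save(recorded, "recording.raw");
}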
See the links below for accessing audio from the microphone in ActionScript:
http://help.adobe.com/en_US/as3/dev/WS5b3ccc516d4fbf351e63e3d118a9b90204-7d1d.html
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/media/Microphone.html
It may help to encode the data in a standardised format (WAV or raw PCM) so it is easier to read and edit later. You may also want to use compression to reduce the file size for transmission (e.g. the Ogg Vorbis codec from Adobe).
How can I embed a wav into as3/flash builder?
I have:
[Embed(source="assets/sounds/claps.wav")]
public var testSound:Class;
private var blahsound:Sound = Sound(new testSound());
But no luck...
You can't. Well, not directly.
Although there are various sound file formats used to encode digital audio, ActionScript 3.0, Flash Player and AIR support sound files that are stored in the mp3 format. They cannot directly load or play sound files in other formats like WAV or AIFF.
You either need to convert it to an mp3 before embedding it, or embed it as a ByteArray and then use SampleDataEvent.SAMPLE_DATA to fill the sound buffer manually with the samples from the wav file, though you are going to have to do some finagling (see the sketch below).
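Here is a rough sketch of the ByteArray route, assuming the embedded file is a canonical WAV with a 44-byte header holding 16-bit, stereo, 44100 Hz PCM; a real implementation should parse the fmt chunk rather than hard-coding the offset and format, and the member names here are mine:

import flash.events.SampleDataEvent;
import flash.media.Sound;
import flash.utils.ByteArray;
import flash.utils.Endian;

[Embed(source="assets/sounds/claps.wav", mimeType="application/octet-stream")]
public var ClapsWav:Class;

private var wavBytes:ByteArray;

public function playClaps():void
{
    wavBytes = new ClapsWav() as ByteArray;
    wavBytes.endian = Endian.LITTLE_ENDIAN;
    wavBytes.position = 44; // skip the (assumed) standard 44-byte header

    var sound:Sound = new Sound();
    sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
    sound.play();
}

private function onSampleData(e:SampleDataEvent):void
{
    // supply up to 8192 stereo float pairs per event; writing fewer than
    // 2048 (e.g. at end of file) lets playback finish naturally
    for (var i:int = 0; i < 8192 && wavBytes.bytesAvailable >= 4; i++)
    {
        e.data.writeFloat(wavBytes.readShort() / 32768); // left
        e.data.writeFloat(wavBytes.readShort() / 32768); // right
    }
}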
It's possible, but hacky. As @32bitkid said, Flash Player doesn't directly support loading sound files other than mp3. The solution is to load the wav as a ByteArray, construct a SWF in memory (since, using the Flash IDE, you can add wav files to a SWF), and then access the Sound object from that SWF.
Check out http://richapps.de/?p=97
You can try the open-source library as3wavsound (AWS). It supports embedding .wav files and playing them natively.