Play the same base64 data multiple times concurrently - HTML

I am making an all-client-side audio/music editor. I have created a few tones mathematically that are stored as base64 in the <audio> src attribute. I can play DIFFERENT tones at the same time, BUT I can only play ONE instance of ONE specific tone at a time.
For example, hammering the key that plays C will sound very awkward, since the C that was playing gets stopped and a new C starts. I would like it to be possible to play several C tones at the same time!
Now I guess this could be done by simply copying the audio element (one or more times) and making the keypress cycle through the copies: for example, if the first C tone is playing and the key for C is pressed, play the second C audio element, and so on and so forth.
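A minimal sketch of that cycling idea (the element IDs and pool size are illustrative):

var cPool = [
  document.getElementById('C1'),
  document.getElementById('C2'),
  document.getElementById('C3')
];
var cNext = 0;

function playC() {
  var el = cPool[cNext];
  cNext = (cNext + 1) % cPool.length; // advance to the next copy
  el.currentTime = 0;                 // rewind in case this copy is still playing
  el.play();
}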
That would work... but since I am using base64 in the source I would also have to have that copied.
<audio id="C1"><source src="data:audio/wav;base64,audio_data"></source></audio>
<audio id="C2"><source src="data:audio/wav;base64,audio_data"></source></audio>
If "audio_data" would be really long then the html would become humongous and also I think that the browser would not understand that both are actually the exactly same data, so it would be come very unoptimized.
So, to the concrete question: is there a way to play the same base64 data several times simultaneously without copying the whole src attribute with the base64 string in it? (Since my application is all client-side, I have no way to save the data to a sound file on a server.)
See the simple example below. It might not work in browsers other than Firefox, because that is the only browser I have tested in:
https://jsfiddle.net/tx3hpptL/
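
For illustration, one way to avoid duplicating the string in the markup is to keep the data URI in a single JavaScript variable and create a fresh Audio object per keypress; a minimal sketch (toneData stands in for the real base64 string):

// The base64 string lives in JS exactly once.
var toneData = 'data:audio/wav;base64,audio_data';

function playTone() {
  // Each Audio object is an independent playback instance of the same data.
  new Audio(toneData).play();
}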

Related

How to load multiple songs/tracks into pygame?

Is there a way to load multiple songs into pygame? I'm not talking about sound effects like this:
crash_sound = pygame.mixer.Sound("crash.ogg")
# and
pygame.mixer.Sound.play(crash_sound)
because I know you can have multiple different sound effects, assigned to different variables, obviously. But I'm talking about the music function in pygame (this one):
pygame.mixer.music.load("chill_music.ogg")
# and
pygame.mixer.music.stop()
because you can't assign it to variables, or anything, you just set the mixer.music thing to one ogg file and can't have more.
I need this feature, because it lets me set it to a '-1' value, making it play over and over again, which I'm pretty sure you can't do with the sound effects, but I want two different songs for two different levels.
I hope it makes sense.
Thanks
I need this feature, because it lets me set it to a '-1' value, making it play over and over again, which I'm pretty sure you can't do with the sound effects
Actually, you can:
play()
begin sound playback
play(loops=0, maxtime=0, fade_ms=0) -> Channel
The loops argument controls how many times the sample will be repeated after being played the first time. ... If loops is set to -1 the Sound will loop indefinitely (though you can still call stop() to stop it).
So you can use the Sound class.
but I want two different songs for two different levels
Nothing stops you from calling pygame.mixer.music.load(...) again with another sound file. It will stop playback of the current file; you then call pygame.mixer.music.play() to start the new one.
Note:
The difference between the music playback and regular Sound playback is that the music is streamed, and never actually loaded all at once...
So if your music files are rather big and you don't want to store them in memory, using pygame.mixer.music is the way to go. If you don't mind loading the files completely, you can use the Sound class.
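A minimal sketch combining both points (the file names are placeholders):

import pygame

pygame.mixer.init()

# In-memory looping via the Sound class:
level1_song = pygame.mixer.Sound("level1.ogg")
channel = level1_song.play(loops=-1)   # -1 loops indefinitely
channel.stop()                         # stop it when leaving the level

# Streaming via mixer.music, loading a different file per level:
pygame.mixer.music.load("level1.ogg")
pygame.mixer.music.play(-1)            # also loops indefinitely
pygame.mixer.music.load("level2.ogg")  # stops level 1 and loads level 2
pygame.mixer.music.play(-1)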

Chrome: Wrong sound when changing the audio source for Audio element and MediaStreamAudioDestinationNode

I have an app where I play different code-generated sounds. I place these sounds in an AudioBufferSourceNode.
I allow the user to choose which output device to play the sound through, so I use a MediaStreamAudioDestinationNode with its stream used as the source for an Audio element. This way, when the user chooses an audio output, I set the sinkId of the Audio element to the requested audio output.
So I have AudioBufferSourceNode -> some Audio Graph (gain nodes, etc) -> MediaStreamAudioDestinationNode -> Audio element.
When I play the first sound, it sounds fine. But when I create a new source and connect it to the same MediaStreamAudioDestinationNode, the sound is played at the wrong pitch.
I created a Fiddle that shows the problem.
Is this a bug, or am I doing something wrong?
The problem was identified based on the OP's Chrome ticket.
It seems to come from a lack of synchronization between the Audio element and its source AudioNode (AudioBufferSourceNode, OscillatorNode, etc.) when you pause the source and play it back again.
The solution is to always call AudioElement.pause() and AudioElement.play() alongside your source's stop and start.
https://jsfiddle.net/k1r7o0xj/3/
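
In sketch form, that pairing looks like this (the names are illustrative; note that an AudioBufferSourceNode can only be started once, so a real player creates a fresh source node per playback):

// audioElement is the <audio> fed by streamDestination.stream, as set up above.
function startSound(bufferSource) {
  audioElement.play();   // resume the element together with the source
  bufferSource.start(0);
}

function stopSound(bufferSource) {
  bufferSource.stop(0);
  audioElement.pause();  // pause the element together with the source
}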
It's possible to dynamically change your graph layout by using .connect() and .disconnect(), even when audio is playing or sent through a stream (which could even be streamed over WebRTC).
I couldn't find a reference in the spec, so I'm pretty sure this is taken for granted.
For example, if you have two AudioBufferSourceNodes bufferSource1 and bufferSource2, and a MediaStreamAudioDestinationNode streamDestination:
bufferSource1.connect(streamDestination);
//do some other things here, and after some time, switch to bufferSource2:
//(streamDestination doesn't need to be explicitly specified here)
bufferSource1.disconnect(streamDestination);
bufferSource2.connect(streamDestination);
Example in action.
Edit 1:
Proper implementation:
According to the Editor's Draft of the Audio Output API, it is planned to make it possible to choose a custom audio output device for the AudioContext as well (by means of new AudioContext({ sinkId: requestedSinkId });). I couldn't find any info on the progress, and even found a related discussion which the asker apparently read already. According to this and (many) other references, it doesn't seem to be an easy task, but it's planned for Web Audio V1.
Edit:
That section has been removed from the API Draft, but you can still find it in an older version.
Current workaround:
I played around with your workaround (using a MediaStreamAudioDestinationNode and an Audio object), and the problem seems to be related to nothing being connected. I modified my example to toggle a single buffer (similar to your example, but with an AudioBufferSourceNode) and observed a similar frequency drop. However, when using a GainNode in between and setting its gain.value to either 0 or 1, the frequency drops disappeared. (This won't be the solution if you want to create and connect new AudioBuffers dynamically.)
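As a rough sketch of that GainNode workaround (node and context names are mine):

// Keep a GainNode permanently connected between the source and the
// stream destination, and mute/unmute instead of disconnecting.
var gainNode = audioCtx.createGain();
bufferSource.connect(gainNode);
gainNode.connect(streamDestination);

gainNode.gain.value = 0; // "off": the graph stays connected, just silent
gainNode.gain.value = 1; // "on"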

How does Youtube's HTML5 video player control buffering?

I was watching a YouTube video and decided to investigate some parts of its video player. I noticed that, unlike most HTML5 video I have seen, YouTube's video player does not use a normal video source; instead it uses a blob URL.
Previously, when I tested HTML5 videos, I found that the server starts streaming the whole video from the start and the browser buffers the complete rest of the video in the background. This means that if your video is 300 megs, all 300 megs will be downloaded. If you seek to the middle, it starts downloading from the seek position all the way to the end.
YouTube does not work this way (at least in Chrome). Instead, it manages to control buffering so that it only buffers a certain amount while paused. It also seems to buffer only the relevant pieces, so if you skip around, it makes sure not to buffer pieces that are unlikely to be watched.
In my attempts to investigate how this works, I noticed the video src attribute has a value of blob:http%3A//www.youtube.com/ee625eee-2802-49b2-a13f-eb374d551d54, which pointed me to blobs, which then led me to typed arrays. Using those two resources I am able to load an mp4 video into a blob and display it in an HTML5 video tag.
However, what I am now stuck on is how YouTube deals with the pieces. Looking at the network traffic, it appears to send requests to http://r6---sn-p5q7ynee.c.youtube.com/videoplayback which return binary video data in chunks of 1.1 MB. It also seems worth noting that most normal HTML5 video requests receive a 206 response code while streaming, yet YouTube's videoplayback calls get a 200 back.
I tried loading only a range of bytes (by setting the Range HTTP header), which unfortunately failed (I'm assuming because no metadata for the video came with the chunk).
At this point I'm stuck on figuring out how YouTube accomplishes this. I came up with several ideas, though I am not completely sold on any of them:
1) YouTube sends down self-contained video and audio chunks with each /videoplayback call. This seems like a pretty heavy burden on the upload side, and it seems like it would be difficult to stitch these together to make them appear as one seamless video. Also, the video tag seems to think it is one full video, judging from calling $('video').duration and $('video').currentTime, which leads me to believe that the video tag thinks it's a single video file. Finally, the video src attribute never changes, which makes me believe it is working with a single blob and not switching out blobs.
2) YouTube constructs an empty blob pre-sized to the full video and updates the blob with pieces as it downloads them. It would then make sure the user has not gotten too close to the last downloaded piece (to prevent the user from entering an undownloaded section of the blob). The problem I see with this is that I don't see any way to dynamically update a blob through JavaScript (although maybe I'm just having trouble googling for it).
3) YouTube downloads the metadata and then starts constructing the blob in order by appending the video pieces as it downloads them. The problem I see with this method is that I don't understand how it would handle seeks beyond the buffered region.
Maybe I"m just missing an obvious answer that's right in front of me. Anyone have any ideas?
Edit: I just thought of a fourth option. Another idea is that they might use the File API to write the binary chunks to a file and stream off of that file. The File API seems to have the ability to seek to specific positions, therefore allowing you to fill a video with empty bytes and fill them in as they are received. This would definitely accommodate video seeking as well.
Okay, a few things you need to know: YouTube is based on this great open source project. It behaves differently in every browser, and if your browser supports a more efficient format like WebM, it will use that to save Google's bandwidth. Also, if you look at this demo, you will find a section which downloads the entire video into something called "offline storage". Chrome has it; some browsers don't, and in those cases they have to use the entire video source instead of a blob. So the blob is streamed depending on the user's interaction with the video. Yes, the video is just one file, and they keep metadata for that video, like a little database that records the length of the video and the points at which it can be divided into chunks.
You can find out more by reading the project's documentation. I really recommend you have a look at the demo.
When you look at Google Chrome's AppData folder while playing a YouTube video, you will see that it buffers in segmented files. The videos uploaded to YouTube are segmented, which is why you can't perfectly pinpoint a timeframe on the first click on the seek bar if that timeframe is outside of the current segment.
The number of segments depends on the length of the video and on the times from which you start and stop playing it back.
When you are linked to a specific timeframe of a video, it simply skips buffering the segments that come before that timeframe.
Unfortunately I don't know much about the coding for video playback, but I hope this points you in the right direction.
There is a canvas element in the page; maybe this will help:
http://html5doctor.com/video-canvas-magic/
We know the video is segmented; the question is how to stitch the segments together. I think the real video element doesn't do the playback work itself: it supplies the data source, and each frame of the segments is drawn to the canvas element:
var v = document.getElementById('v');
var canvas = document.getElementById('c');
var ctx = canvas.getContext('2d');
v.addEventListener('play', function () {
  draw(v, ctx, canvas.width, canvas.height);
}, false);
function draw(v, ctx, w, h) {
  if (v.paused || v.ended) return;
  ctx.drawImage(v, 0, 0, w, h);       // copy the current video frame to the canvas
  setTimeout(draw, 20, v, ctx, w, h); // repeat roughly every 20 ms
}
YouTube uses this feature only in browsers that support Media Source Extensions, so it is up to the browser to decide about all the rest.
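For illustration, here is a bare-bones Media Source Extensions sketch (the segment URL and codec string are assumptions); it produces exactly the kind of blob: URL seen in the src attribute above:

var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource); // the blob: URL

mediaSource.addEventListener('sourceopen', function () {
  var sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  // Fetch one chunk and append it; a real player queues many and handles seeks.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/segments/0.mp4');
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () { sb.appendBuffer(xhr.response); };
  xhr.send();
});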

Interactive video with HTML5

I am new to ActionScript programming. I know some HTML and I am currently learning HTML5. I need to make an interactive video by displaying HTML content at a specific time in the video. Let me be more concrete:
For example, I have a video that is 5 minutes long. Suppose that from 3:50 to 4:00 I need to display two boxes over the video, each one representing a choice. If at 3:50 the video gives the viewer the possibility to select between two paths (the video told the user to select between those paths, for instance), the viewer can pick one of the paths by clicking on one of the two boxes that appear during that interval. I know this needs to be done with the <canvas> tag and with hyperlinks.
My question is: how do I tell the HTML5 video player to display a canvas from 3:50 to 4:00 in which two hyperlinks will be shown?
Thanks for your attention; I will appreciate your help very much. I need some guidance because I have been looking for many days.
For your use case it seems you want to be able to control the video flow of the user through interactions that jump to different times in the video.
Since the HTML5 video player can seek to a different time in a video (using currentTime), you could attach a click event to a box that you lay on top of the video and set the time when that box is clicked:
// Jump 30 seconds into the video
var time = 30;
var video = document.createElement('video');
video.src = "video.mp4";
// Set the time and start playback
video.currentTime = time;
video.play();
You can check out how we created an interactive video authoring tool (open source) using HTML5 and JS, and use that.
If you don't want to spend time coding an interactive video, you should check out H5P's authoring tool through this simple example. You can try creating your own at H5P.org as well. The tool is completely free.
I may be wrong, but I believe that you mean JavaScript instead of ActionScript. If that is the case, then I would definitely check out Video.JS.
When playback reaches the target time, you trigger your method/function, which adds whatever you want on top of the video.
var whereYouAt = myPlayer.currentTime();
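A sketch of that idea with Video.JS (the 3:50-4:00 window is taken from the question above; 'choices' is a hypothetical overlay div holding the two boxes):

var choices = document.getElementById('choices'); // hypothetical overlay div

myPlayer.on('timeupdate', function () {
  var t = myPlayer.currentTime();
  // Show the overlay only between 3:50 (230 s) and 4:00 (240 s)
  choices.style.display = (t >= 230 && t < 240) ? 'block' : 'none';
});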
However, if you DO mean ActionScript, then you are working with a Flash player, and I suggest you take a look at this Vimeo player:
currentTime:Number [read-only] Returns the current playback time of the video.

Embedded Sounds cut off early

I have a combined Flash Builder/Flash Pro project. Because of the hassles involved in maintaining sound assets on the timeline, my sounds are all embedded into class files, like:
[Embed (source="/mp3/Welcome_01_V.mp3", mimeType="audio/mpeg")]
private static const WELCOME_1:Class;
These files are then referenced by the base classes for the symbols that need them, embedded for ActionScript on frame 10 (because the second frame label is on frame 10, to give you space to read the first one).
What I'm finding is that a few of these Sounds don't play all the way through, but the SoundChannel dispatches the "soundComplete" event, and its final position matches the Sound's length.
All sounds are converted from wav to mp3 at 44 kHz / 16 kbps. I faked out the compiler to avoid a reference to Flex by including a dummy SoundAsset that extends Sound.
I don't know what other steps to take to debug this. Is there a way to figure out whether the problem is on the compile side or on the run side?
Updated
More things I have tried:
Looked at the size report: the nonworking sounds were smaller in their embedded form than the source mp3.
Got rid of my own SoundAsset and let Flash link in the Flex framework and do whatever that does (definitely worse).
Dropped the encoding from 44 kHz to 22 kHz (no improvement or worse)
Dropped the bit rate to 8 kbps (the lowest that dbPowerAmp, the tool I use, supports). This usually helps somewhat, but I still usually lose a word or two from the end of the file.
Dropped both parameters in the encoding. This helped a few files that just dropping the bit rate didn't, but not all. Plus it sounds tinny.
Thanks!
For Flash audio, I recommend importing the sound assets into the FLA as wav files, if you have the high-quality source wavs. Otherwise, you can consider converting your mp3s back into wavs as well. Then set the FLA export settings to the quality you want, and Flash will convert your wavs into its own format at that quality, hopefully with fewer issues.
Once you do that, you can export the sound symbol for ActionScript in your library and set a class name, just like you would when embedding it.
One other trick I use: I keep one FLA just for sound assets, which can store wavs as big as I want. When I export it, it becomes a small SWF file which I can then embed in my main application. That way, Flash never has to reconvert the wavs every time I export the main SWF; instead it just copies the SWF data, which is much faster as well.
[Embed(source="Audio/Sfx.swf", symbol="WELCOME_1_WAV")]
private static const WELCOME_1:Class;
If you are having audio cut off issues in Flash Pro, you may want to check your frame rate.
I had an issue with sounds cutting off (in Flash Pro CC 2014). It turned out to be related to the frame rate being set to 25 rather than the default 24. I had been using 25 to resolve an issue unrelated to anything in this project, so my solution was to change the FPS back to 24, which made it necessary to move all of the synchronized animations to re-align with the corresponding audio.
Why long(ish) audio tracks get cut off at the end when the frame rate is 25, regardless of proper keyframing, is a mystery. But it resolved the symptoms.
My symptoms appeared specifically when an audio clip was particularly long and deep into the timeline.
What worked for me: I opened the audio files in an audio editor and added a few seconds of silence to the end.
Good luck! - J.Hall