With the help of waveformjs.org I am able to draw a waveform for an mp3 file. I can also use wav2json to get JSON data for an audio file hosted on my own server, so I don't have to depend on SoundCloud for that. So far so good. Now, the next challenges for me are:
To fill the color in the waveform as audio playback progresses.
To start playback from whatever point of the waveform is clicked.
Has anyone done something similar? I tried to look into the SoundCloud JS SDK but had no success. Any pointers in this direction will be appreciated, thanks.
As to changing the color, my earlier answer here may help:
Soundcloud modify the waveform color
As to moving the position when you click on the waveform, you can do something like this:
Assuming you already have an x position from the click event, adjusted relative to the canvas:
/// get a representation of x relative to what the waveform represents:
var pos = (x - waveformX) / waveformWidth;
/// To convert this to time, simply multiply this factor with duration which
/// is in seconds:
audio.currentTime = audio.duration * pos;
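If you don't have that x yet, here is a minimal sketch of wiring it up; the names canvas, audio, waveformX and waveformWidth are assumptions standing in for your own setup:

// Minimal sketch: seek the audio when the waveform canvas is clicked.
canvas.addEventListener('click', function (e) {
    // translate the click from page coordinates to canvas coordinates
    var rect = canvas.getBoundingClientRect();
    var x = e.clientX - rect.left;

    // same formula as above: which fraction of the waveform was clicked
    var pos = (x - waveformX) / waveformWidth;

    // clamp to [0, 1] and convert to seconds
    pos = Math.max(0, Math.min(1, pos));
    audio.currentTime = audio.duration * pos;
});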
Hope this helps!
Update
requestAnimationFrame is optimized not only for performance but also for power consumption, as more and more users are on mobile devices. For this reason browsers may pause rAF, or reduce its frame rate, when the tab is in the background or otherwise invisible. This can cause a position-based approach to render the waveform with a delay, or not at all, after switching tabs.
I would recommend always using a time-based approach so that neither FPS throttling nor other browser behavior interrupts the drawing of the waveform.
You can force the waveform to be updated at the actual current time, since luckily we can read this from the currentTime property of the audio element.
In the rAF loop, simply update the position like this, using a "reversed" version of the formula above:
pos = audio.currentTime / audio.duration * waveformWidth + waveformX
When the tab becomes visible again, this approach makes sure the waveform continues from the actual time position of the audio.
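Put together, a rough sketch of such a time-based loop might look like this; renderWaveform is a placeholder for your own drawing code, and audio, waveformX and waveformWidth are assumed to exist already:

// Rough sketch of a time-based rAF loop (placeholder names, not a drop-in solution).
function loop() {
    // derive the progress position from the audio clock, not from counting frames
    var pos = audio.currentTime / audio.duration * waveformWidth + waveformX;

    renderWaveform(pos); // redraw the waveform, filling the played part up to pos

    if (!audio.paused && !audio.ended) {
        requestAnimationFrame(loop);
    }
}

audio.addEventListener('play', function () {
    requestAnimationFrame(loop);
});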
So, I'm pretty much a beginner in Flash and Actionscript (using AS3, as I said in the title), and I'm trying to make a basic escape the room game. I haven't gotten far, and right now that's because every time I test my game (or publish preview it) the graphics get this annoying outline. Here it is when tested: http://i305.photobucket.com/albums/nn228/chokingondrama/flash.png
Every outline corresponds to some object present in the game, most of which have an alpha of 0 since they're on different sides of the room. This didn't happen before, but once I added the code that lets the player change their view with the arrow keys (each viewpoint/wall is a different frame), these outlines appeared.
It's a little different when published to HTML, basically it just gives each image a white background: http://i305.photobucket.com/albums/nn228/chokingondrama/html.png
Also, it would be nice if somebody could give me advice on how to make sure that importing assets into Flash won't result in lower quality.
Thanks in advance. If needed, I'll post any part of the code.
Some tips:
Don't set alpha to 0; instead use the visible property. Setting movieclip.visible = false will make it a lot more efficient.
As for the importing and quality: after you import to the stage or library, bring up the library (Ctrl+L), right-click the file you imported, and go to Properties. If it's an image, set compression to lossless and allow smoothing.
For audio, go to File -> Publish Settings and change Audio stream and Audio event (whichever you might use) to 128 kbps.
As for your main question, I need more info; if you want, you can post your source. It might be because of how you are placing your graphics on the stage.
For each of your MovieClips in question:
Try disabling button mode and see if the rectangles go away.
movieClipName.buttonMode = false;
If that doesn't help, or you really want button mode, try setting
movieClipName.tabEnabled = false;
There's a chance that, since you added keyboard interaction, each of your MovieClips is now expecting to be selected by the user when they press the Tab key, much like any normal web form.
tabEnabled in the docs
You could also try
movieClipName.focusRect = false;
focusRect in the docs
I am new to ActionScript programming. I know some HTML and I am currently learning HTML5. I need to make an interactive video by overlaying HTML content at a specific time in the video. I'll be more concise:
For example, I have a video that is 5 minutes long. Suppose that from 3:50 to 4:00 I need to display two boxes over the video, each one representing one choice. If at 3:50 the video gives the viewer the possibility to select between two paths (the video tells the user to choose between those paths, for instance), the viewer should be able to pick one of the paths by clicking on one of the two boxes that appear during that interval. I know this needs to be made with the tag and with hyperlinks.
My question is: how do I tell the HTML5 video player to display a canvas from 3:50 to 4:00 in which two hyperlinks will be shown?
Thanks for your attention; I will very much appreciate your help. I need some kind of guidance because I have been looking for many days.
For your use case it seems you want to be able to control the flow of the video through interactions that jump to different times in it.
Using the HTML5 video element's ability to seek to a different time (via currentTime), you could lay a box on top of the video, attach a click event to it, and set the time when that box is clicked:
// Jump 30 seconds into the video
var time = 30; // seconds, as a number
var video = document.createElement('video');
video.src = "video.mp4";
// Set the time and start playback from there
video.currentTime = time;
video.play();
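For the time window itself (showing the boxes only between 3:50 and 4:00), a minimal sketch using the timeupdate event could look like the following; the element ids and the target times are only hypothetical:

// Show an overlay only between 3:50 (230 s) and 4:00 (240 s) of playback.
var video = document.querySelector('video');
var overlay = document.getElementById('choices'); // container holding the two boxes

video.addEventListener('timeupdate', function () {
    var t = video.currentTime;
    overlay.style.display = (t >= 230 && t < 240) ? 'block' : 'none';
});

// Each box can then jump to its own branch of the video when clicked:
document.getElementById('choiceA').addEventListener('click', function () {
    video.currentTime = 240; // hypothetical start time of path A
    video.play();
});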
You can check out how we created an interactive video authoring tool (open source) using HTML5 and JS, and use that.
If you don't want to spend time coding an interactive video, you should check out H5P's authoring tool through this simple example. You can test out creating your own at H5P.org as well. The tool is completely free.
I may be wrong, but I believe you mean JavaScript instead of ActionScript. If that is the case, then I would definitely check out Video.JS.
When you reach the desired time, you trigger your method/function, which adds whatever you want on top of the video.
var whereYouAt = myPlayer.currentTime();
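For instance, a rough sketch with the Video.JS API (the player id and the 230-240 second window are just illustrative):

var myPlayer = videojs('example_video');

myPlayer.on('timeupdate', function () {
    var whereYouAt = myPlayer.currentTime();
    // show or hide your overlay depending on the current time
    if (whereYouAt >= 230 && whereYouAt < 240) {
        showChoices(); // your own function that displays the boxes
    } else {
        hideChoices();
    }
});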
However, if you DO mean ActionScript, then you are working with a Flash player, and I suggest you take a look at this Vimeo Player:
currentTime:Number [read-only] Returns the current playback time of the video.
I've been Googling around a bit for an answer and haven't found a definitive one either way: is it possible to play a video using an HTML5 canvas, and also allow the user to draw on this video? The use case, for some context, is to play a video on infinite loop so the user can draw multiple boxes over specific areas to indicate regions of interest.
As a bonus (:P), if I can figure out how to do this on its own, any hints as to how this could be done within Drupal? I'm already looking at the Canvas Field module, but if you have any hints on this point too (though the first one is the priority), that'd be awesome!
You can draw html5 video elements onto a canvas. The drawImage method accepts a video element in the first parameter just like an image element. This will take the current "frame" of the video element and render it onto the canvas. To get fluid playback of the video you will need to draw the video to the canvas repeatedly.
You can then draw on the canvas normally, making sure you redraw everything after each update of the video frame.
Here is a demo of video on canvas
Here is an in-depth look into video and the canvas
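A bare-bones sketch of that idea (element ids and the boxes array are illustrative, not taken from the demos above):

var video = document.getElementById('source-video');
var canvas = document.getElementById('drawing-canvas');
var ctx = canvas.getContext('2d');
var boxes = []; // {x, y, w, h} rectangles the user has drawn so far

function render() {
    // paint the current video frame first...
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

    // ...then everything the user has drawn, so it stays visible on top
    ctx.strokeStyle = 'red';
    boxes.forEach(function (b) {
        ctx.strokeRect(b.x, b.y, b.w, b.h);
    });

    requestAnimationFrame(render);
}

video.addEventListener('play', function () {
    requestAnimationFrame(render);
});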
I recently received this request from a client to provide this feature, and it had to be CMS-friendly. The technique involves three big ideas:
a drawing function
repeatedly calling upon the same drawing function
using requestAnimationFrame to paint the next frame
Assuming you have a video element already, you'd take the following steps:
Hide the video element
Create a canvas element whose height/width match the video element, store this somewhere
Get the context of the canvas element with canvas.getContext('2d') and also store that somewhere
Create a drawing function
In that drawing function, you would use context.drawImage(src, x, y), where src is the edited version of the current frame of the video
In that drawing function, use recursion to call itself again
I can give you two examples of this being done (and usable for content management systems):
The first is here: https://jsfiddle.net/yywL381w/19/
A company called SDL makes a tool called Media Manager that hosts videos. What you see is a jQuery plugin that takes its parameters from a data-* , makes a request from the Media Manager Rest API, creates a video, and adds effects based entirely on data* attributes. That plugin could easily be tweaked to work with videos called from other sources. You can look at the repo for it for more details on usage.
Another example is here: http://codepen.io/paceaux/pen/egLOeR
That is not a jQuery plugin; it's an ES6 class instead. You can create an image/video and apply a cropping effect with this:
let imageModule = new ImageCanvasModule(module);
imageModule.createCanvas();
imageModule.drawOnCanvas();
imageModule.hideOriginal();
You'll observe, in the ImageCanvasModule class, this method:
drawFrame () {
    // nothing to draw while a video is paused
    if (this.isVideo && this.media.paused) return false;

    let x = 0;
    let width = this.media.offsetWidth;
    let y = 0;

    // apply the CMS-selected "frame" effect to the hidden back canvas
    this.imageFrames[this.module.dataset.imageFrame](this.backContext);

    // draw the current media frame onto the back canvas...
    this.backContext.drawImage(this.media, x, y, width, this.canvas.height);

    // ...then copy the finished result onto the visible canvas
    this.context.drawImage(this.backCanvas, 0, 0);

    if (this.isVideo) {
        window.requestAnimationFrame(() => {
            this.drawFrame();
        });
    }
}
The class has created a second canvas to use for drawing. That canvas isn't visible; it's just there to save the browser some heartache.
The "manipulation" that is content manageable is this.imageFrames[this.module.dataset.imageFrame](this.backContext);
The "frame" is an attribute stored on the image/video (Which could be output by a template in the CMS). This gets the name of the imageFrame, and runs it as a matching function. It also sends in the context (so I can toggle between drawing on the back canvas or main canvas if needed)
then this.backContext.drawImage(this.media, x, y, width, this.canvas.height); draws the image on the back context.
Finally, this appears on the main canvas with this.context.drawImage(this.backCanvas, 0, 0);, where I take the back canvas and draw it onto the main canvas. So the visible canvas has as few manipulations applied to it as possible.
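For reference, setting up such an off-screen buffer is just a matter of creating a second canvas that never enters the DOM (a sketch of the idea, not the exact code from the class):

// Double-buffer sketch: draw to backContext, then copy the result to the visible canvas.
var backCanvas = document.createElement('canvas');
backCanvas.width = canvas.width;
backCanvas.height = canvas.height;
var backContext = backCanvas.getContext('2d');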
And at the end, because this is a video, we want to draw a new frame. So we have the function call itself:
if (this.isVideo) {
    window.requestAnimationFrame(() => {
        this.drawFrame();
    });
}
This whole setup allows us to use the CMS to output data-* attributes containing the type of frame the user wants drawn around the image. The JavaScript then produces a canvasified version of that image or video. Sample markup might look like:
<video muted loop autoplay data-image-frame="wedgeTop">
I'm on an AS3 project, playing a video (H264). I want, for some special reasons, to go to a certain position.
a) I try it with NetStream.seek(). There it only goes to keyframes; with my current encoding, this means I can only find a position every 1 second. (For better resolution, I'd have to encode the movie with as many keyframes as possible, i.e. every frame a keyframe.)
This is definitely not my favourite way, because I don't want to re-encode all the vids.
b) I try it with NetStream.step(). This should give me the opportunity to step slowly from frame to frame. But the documentation says:
This method is available only when data is streaming from Flash Media Server 3.5.3 or higher and when NetStream.inBufferSeek is true.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html#step()
Does this mean it is not possible with AIR for desktop? When I try it, nothing works.
Any suggestions, how to solve this problem?
Greetings & Thank you!
Nicolas
Flash video can only be advanced by seconds unless you have Flash Media Server hosting your video. Technically, that means you could have it working as intended in AIR; however, the video would have to be streaming (silly Adobe...).
You have two options:
1) Import the footage as a MovieClip. The Flash IDE has a wizard for this, and if you're developing exclusively in a non-Flash-IDE environment, you can convert and export it as an external asset such as a SWF or SWC. This would then be embedded or runtime-loaded into your app, giving you access to the per-frame steppable methods of MovieClip. This, however, does come with some audio syncing issues (IIRC). Also, scrubbing backwards is not an MC's forte.
2) Write your own video object that loads an image sequence and displays each frame in order. You'd have to set up your own audio syncing abilities, but it might be the most direct solution apart from FLVComponent or NetStream.
I've noticed that Flash Player 9 scrubs nice and smooth, but in Player 10+ I get this no-scrub problem.
My fix was to limit the frequency of calls to the seek function to one every 200 ms or more. This fixed scrubbing, but it is much less smooth than Player 9, perhaps because of the "Flash video can only be advanced by seconds" limitation. I used a timer to trigger the function that calls seek() for the video.
private var scrubInterval:Timer = new Timer(200);

private function videoScrubberTouch():void {
    // pause the stream while the user drags the thumb
    _ns.pause();
    var bounds:Rectangle = new Rectangle(0, 0, 340, 0);
    scrubInterval.addEventListener(TimerEvent.TIMER, scrubTimeline);
    scrubInterval.start();
    videoThumb.startDrag(false, bounds);
}

private function scrubTimeline(e:TimerEvent):void {
    // map the thumb's x position (0-340) to a time in the video
    var amt:Number = Math.floor((videoThumb.x / 340) * duration);
    trace("SCRUB duration: " + duration + " videoThumb.x: " + videoThumb.x + " amt " + amt);
    _ns.seek(amt);
}
Please check this Demo link (or get the SWF file to test outside of the browser via the desktop Flash Player).
Note: Demo requires FLV with H.264 video codec and AAC or MP3 audio codec.
The source code for that is here: Github link
In the above demo there is byte-based seeking and frame-by-frame stepping. The functions you mainly want to study are:
Append_SEEK ( position amount ) - This will go to the specified position in bytes and search for the nearest available keyframe.
get_frame_TAG - This will extract a tag holding one frame of data. Audio can be in frames too, but let's assume you have video only. That function is your opportunity to adjust timestamps. When it runs, it will also append the tag (so each "get_frame_TAG" is also a "frame step").
For example: you have a 25 fps video and you want the third frame at 4 seconds into playback...
1000 milliseconds / 25 fps = 40 ms per frame timestamp. So 4 seconds == 4000 ms, plus 40 ms x 3 frames == an expected timestamp of 4120.
So getting that frame means: first find a keyframe, then step through each frame, checking whether its timestamp is the one you want. If it isn't, change the timestamp to match the most recent keyframe timestamp (this forces Flash to fast-forward through the frames to keep things in sync, as it assumes a frame with a smaller-than-expected timestamp should already have been played by that time). You can "hide" the video object during this process if you don't like the look of the fast-forwarding.
I have a custom video player I'm making. I need some way to react when the externally played FLV file reaches a certain point in the movie, without embedding extra data in the FLV file. I want to react at the 90%-99% point of the movie because I didn't like the behavior I got when reacting only once the stream completes playing (I want it a bit earlier). How do I achieve this?
I'm surprised Adobe didn't document the structure of the object passed to things like onMetaData and onCuePoint...
You can achieve this with a little math: take the position of the play head, divide it by the duration, and multiply by 100.
If the value is greater than 90, fire your event.
((p / d) * 100)
You can do this by programmatically setting a cue point based on the length of the movie clip, then creating an event listener for it.
var endpoint:Number = flvPlayer.metadata.duration*.95; //95% of the video length
flvPlayer.addASCuePoint(endpoint, "endpoint");
flvPlayer.addEventListener(MetadataEvent.CUE_POINT, registerCuePoints);
function registerCuePoints(myEvent:MetadataEvent) {
    if (myEvent.info.name == "endpoint") {
        // you've reached your cue point, not something embedded in the video.
    }
}