I implemented a Flex application to play an incoming video stream from a Red5 Media Server.
private function playStream(streamName:String, offset:int):void {
    stream = new NetStream(connection);
    stream.play(streamName + ".flv", offset);
    var streamVideo:Video = new Video();
    streamVideo.attachNetStream(stream);
    display.addChild(streamVideo);
}
The playStream method plays a given stream from the position defined by the offset parameter. Now I want to update my page content depending on the video stream being played. More specifically, I want to call an ActionScript method that updates the content after each minute of the video. Should I use a Timer for that?
Best regards
Yes, you will need to use a Timer object. But don't use the Timer to determine where the user is in playback of the video; use the time property of the NetStream for that instead.
You should also add an event listener for the NetStatusEvent in your playStream() method. In particular, you want to inspect the info property of this event (technically the info.code property). It carries several useful messages that tell you when video playback starts, stops, or pauses, when the user performs a seek, and so on. This way you can manage your Timer and update the UI efficiently when the user interacts with the video player.
Some of the relevant codes on the NetStatusEvent are below (a sketch of how to tie them to a Timer follows the list), but inspect the full list; you might find others that will help you.
NetStream.Pause.Notify (the user paused playback, start/stop the timer here as appropriate)
NetStream.Play.Start (playback started, start the timer)
NetStream.Play.Stop (playback stopped, stop the timer)
NetStream.Seek.Notify (user seeked to a new point, update the UI)
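A minimal sketch of the idea, assuming the stream, connection and display variables from your playStream() method; updateContent(), minuteTimer and lastMinute are hypothetical names (untested):

import flash.utils.Timer;
import flash.events.TimerEvent;
import flash.events.NetStatusEvent;

private var minuteTimer:Timer = new Timer(1000); // poll once per second
private var lastMinute:int = -1;

private function playStream(streamName:String, offset:int):void {
    stream = new NetStream(connection);
    stream.addEventListener(NetStatusEvent.NET_STATUS, onNetStatus);
    stream.play(streamName + ".flv", offset);
    var streamVideo:Video = new Video();
    streamVideo.attachNetStream(stream);
    display.addChild(streamVideo);
    minuteTimer.addEventListener(TimerEvent.TIMER, onTick);
}

private function onNetStatus(e:NetStatusEvent):void {
    switch (e.info.code) {
        case "NetStream.Play.Start":
        case "NetStream.Unpause.Notify":
            minuteTimer.start();
            break;
        case "NetStream.Play.Stop":
        case "NetStream.Pause.Notify":
            minuteTimer.stop();
            break;
        case "NetStream.Seek.Notify":
            lastMinute = int(stream.time / 60); // resync after a seek
            updateContent(lastMinute);          // hypothetical UI update
            break;
    }
}

private function onTick(e:TimerEvent):void {
    // stream.time is the playback position in the video, not wall-clock time
    var minute:int = int(stream.time / 60);
    if (minute != lastMinute) {
        lastMinute = minute;
        updateContent(minute); // hypothetical method that updates the page content
    }
}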
I have a TVML/TVJS app that presents a document with a number of playable items. Each item is a lockup element with an event handler to launch the built-in media player, very much like in the example project:
https://developer.apple.com/documentation/tvmljs/playing_media_in_a_client-server_app
In the example code, the event handler creates a new Player object from scratch every time it is triggered, but I would like the player to be resumable: when the user exits the player (e.g. with the menu button) and returns by selecting the item again, I would like to resume the player where it left off.
Previously, I did this by creating a Player object for each item (including its Playlist and MediaItem) when the document was loaded, and just executing player.select() or player.play() in the event handler. That worked well.
Since tvOS 14, creating all these Player objects at document load seems to overload the app (perhaps it starts fetching all these items from the network right away). So I no longer create the Player objects beforehand; instead, in the event handler I check whether I already have a Player for the item, create one the first time, and reuse the existing Player object afterwards.
But even though I checked that I reuse the existing Player object, calling play() or present() causes the playback to restart from the beginning. So what would be the appropriate way to obtain a resumable player?
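For reference, a stripped-down version of the pattern described above; the players map, the playItem() function and getting the URL from the lockup are my own placeholders, and this is the caching that still restarts playback:

var players = {};

function playItem(url) {
    var player = players[url];
    if (!player) {
        player = new Player();
        var playlist = new Playlist();
        playlist.push(new MediaItem("video", url));
        player.playlist = playlist;
        players[url] = player;
    }
    // Even though the same Player object is reused, present()/play() restarts from 0.
    player.present();
}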
I am working with MediaSource and SourceBuffer to play HTML5 video. I am sequentially fetching DASH fragments to keep video playback uninterrupted. But sometimes, due to network conditions, the SourceBuffer runs out of data, and playback only resumes when new data arrives. During this period the video looks stalled, so I want to show a visual indication over the media element that it is paused while buffering the required data.
I tried binding 'waiting' and 'stalled' events on video, but none of the events get fired.
var vid = $('video')[0];
vid.addEventListener('stalled', function(e) { console.log('Media stalled...');})
Is there any other way to know whether media has been stalled and when it resumes back?
Thanks.
Using the stalled event is correct, but unfortunately, it does not always work as expected.
You could use Media Source Extensions, which give you full control of the buffer and allow you to detect stalls manually. However, a solution along those lines is a bit out of scope here IMO.
You could possibly get around it using the timeupdate event as well:
Have a setTimeout() running with a timeout value
Inside the timeupdate handler, clear this timer and set a new one
If the timer isn't reset, there has been no time progress; if the video is not paused or ended, assume it is stalling
Example (untested):
...
var timerID = null, isStalling = false;

vid.addEventListener("timeupdate", function() {
    clearTimeout(timerID);                       // progress was made, reset the watchdog
    isStalling = false;
    // remove stalling indicator if any ...
    timerID = setTimeout(reportStalling, 2000);  // 2 sec timeout
});

// integrate with the stalled event in some way -
vid.addEventListener("stalled", function() { isStalling = true; });

function reportStalling() {
    // no timeupdate for 2 seconds: if we are not paused/ended, assume a stall
    if ((!vid.paused && !vid.ended) || isStalling) { /* ... possible stalling ... */ }
}
...
You may have to do some additional checks to eliminate other possibilities, but this is only to give you the general idea, and it should be used in addition to the stalled event.
A different approach could be to monitor the loaded buffer segments using the buffered object (see this answer here for an example of its usage).
The buffered ranges can tell you whether you are making any progress; compare currentTime with the ranges to see if the playhead is at the end of a range and/or whether the ranges are changing at all.
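A rough sketch of that idea (untested; the 0.5-second threshold and the 1-second polling interval are arbitrary choices):

setInterval(function() {
    var buffered = vid.buffered;
    var nearEdge = false;
    for (var i = 0; i < buffered.length; i++) {
        if (vid.currentTime >= buffered.start(i) && vid.currentTime <= buffered.end(i)) {
            // less than ~0.5 s of data ahead of the playhead: the SourceBuffer is likely starved
            nearEdge = (buffered.end(i) - vid.currentTime) < 0.5;
            break;
        }
    }
    if (nearEdge && !vid.paused && !vid.ended) {
        // show the buffering indicator here
    }
}, 1000);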
In any case, I hope this gives some input.
Here is the reference: http://www.w3schools.com/tags/ref_av_dom.asp
I think you are looking for the suspend event: it fires when the browser is intentionally not fetching media data.
I am recording from a webcam to AMS in an AS3 project, and to get the volume level from the microphone I have to attach the microphone to a NetStream. Later, when the user initiates recording, the NetStream.time value counts from when the camera was attached rather than from when NetStream.publish was called. If they stop the recording and record again, NetStream.time then starts from 0. So far the only way I have found to get around this is to call publish and then close on the NetStream as soon as the microphone is attached. The docs for the AS2 NetStream mention this fact and suggest calling NetStream.publish(false), which doesn't work in AS3; neither does calling publish with no arguments.
ns = new NetStream(nc);
ns.attachCamera(cam);
ns.attachAudio(mic);
Then later
ns.publish(filename,"record");
trace(ns.time);
The traced ns.time value is the elapsed time between attaching the camera and calling publish for the first time.
The only solution I have so far is:
ns = new NetStream(nc);
ns.attachCamera(cam);
ns.attachAudio(mic);
ns.publish(filename,"record");
ns.close();
Then, when the user starts the recording:
ns.publish(filename,"record");
trace(ns.time);
ns.time is now zero. Am I missing something, or is there a better solution?
You can use mic.setLoopBack(true), which will route microphone activity to your speakers. You will then be able to read activityLevel. But you will probably want to set a soundTransform on the mic with volume 0, so the mic is effectively muted.
Basically:
mic.setLoopBack(true);
var transform:SoundTransform = new SoundTransform();
transform.volume = 0;
mic.soundTransform = transform;
After you stop displaying the activity level, make sure you remove the transform.
My Facebook app is a Flash game. I want the game swf to save its latest state to the server when the window unloads. Since I embed the swf with swfobject, I use its embed handler to add an onbeforeunload listener to window:
function embedHandler(event)
{
    shell = event.ref;
    window.onbeforeunload = function(event)
    {
        shell.message("save", null);
        // delay the unloading a bit so flash has time to contact server
        var now = new Date().getTime();
        var later = now + 50;
        while (now < later)
        {
            now = new Date().getTime();
        }
    }
}
Here's the problem: this works every time when the swf is loaded directly from the app (a Rails app). It never works when the swf is loaded from Amazon.
All the cross-domain issues are worked out between the swf and the app: the Rails app accepts calls from the Amazon-hosted swf, and that swf loads data from the Rails app.
ExternalInterface also works for both outgoing and incoming calls. But I suspect this is nonetheless a browser security issue, since the incoming ExternalInterface call only fails when:
it is called from inside the window.onbeforeunload handler
the swf originates from Amazon.
What is the problem? How does one unobtrusively save game state when the game is served from a CDN and the save is triggered by onbeforeunload in JavaScript? Or is there a better way to accomplish the same thing?
Testing in Firefox.
From the sounds of it, you have worked out all the security issues.
It is more likely a lack of understanding on your part of what is going on behind the scenes when onbeforeunload is triggered.
onbeforeunload will not wait for your "game.swf" to finish the callback via ExternalInterface; this is why you added a stalling mechanism to delay that process. However, I will assume this works from the Rails app because that is a local server and you are not subject to the lag monster.
Now you might be thinking: well, I put in a delay, it should work. But that delay is only 50 milliseconds. Try increasing it to 5000 (5 seconds) and you should see it start to work on the CDN.
The saving of data should be controlled by the Flash app itself and not triggered by an outside source.
In the game itself, you should have milestones that trigger a save event.
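As an illustration only, a minimal sketch of saving from inside the game at milestones plus a periodic backstop; the endpoint URL, serializeState() and saveGameState() are hypothetical:

import flash.net.URLRequest;
import flash.net.URLRequestMethod;
import flash.net.URLLoader;
import flash.utils.Timer;
import flash.events.TimerEvent;

private var autoSaveTimer:Timer = new Timer(30000); // periodic backstop, every 30 s

private function startAutoSave():void {
    autoSaveTimer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
        saveGameState();
    });
    autoSaveTimer.start();
}

// Call this from milestone events: level complete, purchase, checkpoint, etc.
private function saveGameState():void {
    var req:URLRequest = new URLRequest("https://example.com/save"); // hypothetical endpoint
    req.method = URLRequestMethod.POST;
    req.data = serializeState(); // hypothetical: URLVariables describing the game state
    new URLLoader().load(req);
}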
In closing, I want to add that this is by far the worst method you could use to save information to a server. onbeforeunload is unreliable and subject to cross-browser issues, and putting a lag loop in the JavaScript is just a bad idea that will only annoy your users to the point that they won't return.
I am writing a basic video player in Flash CS5 and ActionScript 3. For this basic player, I attach my NetStream to my NetConnection, then call the stream's .play() method to begin loading. Although I want the metadata and I want the stream to begin buffering, I do not wish to start playing right away, so I immediately call the stream's .pause() method. Unfortunately, when I pause immediately, my stream's client's onMetaData callback is not always called, so I don't necessarily get the total playtime of the loaded video.
As a workaround, I put the call to the pause method inside the onMetaData listener, but sometimes the video will have played a bit before its metadata arrives, and will therefore continue playing until it does.
Is there a good way to stop my stream from playing, and still get my video metadata?
Okay, here's a neat little way of thinking about this differently... Do not attach your Video object to your stream object right away. Start the stream playing while showing a "please wait" visual, WITHOUT your Video object being displayed. In your onMetaData listener, check whether you have stored a duration previously. If not, assume this is the first call to onMetaData, store the duration, pause playback, seek the stream to 0, THEN attach the Video object.
The user will see the "please wait" for just a second, then the video will appear, paused and ready to be played, with its duration known as expected. The user will be completely unaware that the stream played forward a bit while they were waiting.
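A minimal sketch of that approach, assuming ns, nc and a video display object exist elsewhere in the player; showPleaseWait()/hidePleaseWait() are hypothetical (untested):

private var storedDuration:Number = NaN;

private function startLoading(name:String):void {
    ns = new NetStream(nc);
    ns.client = { onMetaData: onMetaData };
    ns.play(name);      // start buffering; the Video object is NOT attached yet
    showPleaseWait();   // hypothetical "please wait" visual
}

private function onMetaData(info:Object):void {
    if (isNaN(storedDuration)) {            // first onMetaData call
        storedDuration = info.duration;     // total playtime in seconds
        ns.pause();
        ns.seek(0);                         // rewind whatever played while buffering
        video.attachNetStream(ns);          // now show the (paused) video
        hidePleaseWait();                   // hypothetical
    }
}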
You should call pause when the NetStatusEvent.NET_STATUS event reports NetStream.Play.Start.
Update:
For very short streams (e.g. when the buffer length exceeds the duration), NetStream.Play.Start is likely to fire just before the onMetaData callback.
Before pausing on NetStream.Play.Start, check whether the metadata has already been provided; if not, don't pause straight away but wait for onMetaData to pause (just set a flag, e.g. pauseOnMetaData = true).
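A sketch of that flag approach (untested); hasMetaData, pauseOnMetaData and duration are hypothetical fields:

private var hasMetaData:Boolean = false;
private var pauseOnMetaData:Boolean = false;
private var duration:Number = 0;

private function onNetStatus(e:NetStatusEvent):void {
    if (e.info.code == "NetStream.Play.Start") {
        if (hasMetaData) {
            ns.pause();              // metadata already arrived, safe to pause now
        } else {
            pauseOnMetaData = true;  // defer the pause until onMetaData fires
        }
    }
}

private function onMetaData(info:Object):void {
    hasMetaData = true;
    duration = info.duration;        // total playtime of the video
    if (pauseOnMetaData) {
        pauseOnMetaData = false;
        ns.pause();
    }
}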