How to read an HTML cookie in Flash?

I'm hoping to get pointed in the right direction here. The problem I'm having is figuring out how to read an HTML-created cookie in Flash. I have a video player that should autoplay once in a 24-hour period; the next day it should autoplay again for the end user.
This is what the script on the HTML page looks like that displays the Flash player and the cookie:
<script type="text/javascript">
var so = new SWFObject("flvplayer.swf", "mymovie", "640", "394", "8", "#90ab69");
var x = readCookie('homepageIntro') // <- The Cookie (How do I read this in Flash?)
so.addParam("quality", "high");
so.addParam("wmode", "transparent");
so.useExpressInstall('expressinstall.swf');
so.addVariable("autostart", "false");
so.addVariable("file", "video.flv");
so.addVariable("key", "");
so.addVariable("showfsbutton", "false");
so.addVariable("noControls", "false");
so.addVariable("home", "true");
so.write("flashcontent");
</script>
Not knowing how to read that var x inside of Flash, I tried to get around using it by using a Flash cookie instead. However, now the video player only ever autoplays once and never autoplays again (I'm unable to clear the Flash cookie).
public function sharedObjectCheck():void
{
    if (mySharedObject.data.flashCookie == "true") {
        // Code to NOT autoplay video
    } else if (mySharedObject.data.flashCookie == null) {
        mySharedObject.data.flashCookie = "true"; // if first time, set the cookie value
        mySharedObject.flush(); // add the cookie
    }
}
I did some searching and found this: Have a HTML page play Flash movie only once (not when revisited…), but again this is a Flash-only approach that never allows the video to autoplay again after a set period.
So my question to my fellow Flash stackers is: how do I read that var x (the cookie) in Flash?

A direct answer to your question is to use ExternalInterface:
import flash.external.*;

try {
    var cookie:String = ExternalInterface.call("readCookie", "homepageIntro") as String;
} catch (error:SecurityError) {
    trace("SecurityError:", error.message);
} catch (error:Error) {
    trace("Error:", error.message);
}
You will likely also need to set the allowScriptAccess parameter for the call to be allowed to run.
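With the SWFObject setup from the question, that could look like this (sameDomain is the usual default; "always" is only needed for cross-domain calls):
so.addParam("allowScriptAccess", "sameDomain");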
Using an LSO is probably your best option, or else pass the value of x in as a FlashVar.

Have you tried assigning the x value to flashvars?
so.addVariable("cookie", x);
In which case, you should be able to retrieve it in Flash by doing this in the Document Class:
var params:Object = this.loaderInfo.parameters;
var cookie:Object = params.cookie;

Do it with a Flash cookie, but store the last time the video autoplayed rather than a simple flag.
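A minimal sketch of that idea in ActionScript 3 (untested; the shared object name and field are illustrative):
import flash.net.SharedObject;

// returns true if we should autoplay, and records the time when we do
function shouldAutoplay():Boolean
{
    var so:SharedObject = SharedObject.getLocal("homepageIntro");
    var now:Number = new Date().getTime();
    var last:Number = Number(so.data.lastAutoplay); // NaN on first visit
    if (isNaN(last) || now - last > 24 * 60 * 60 * 1000) // more than 24 hours ago
    {
        so.data.lastAutoplay = now; // remember when we autoplayed
        so.flush(); // persist the shared object
        return true;
    }
    return false;
}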

Related

Autoplay stops after some time: HTML5 video taken from Google Drive

I have an mp4 video inside an HTML video tag; the src attribute points to the Google Drive video.
See my code below:
<video autoplay muted loop id="video" class="" src="https://drive.google.com/uc?export=download&id=1qSC6ySf6ZZldFRpuBx9EXvDsD-mfve1Z" type="video/mp4"></video>
Note: when I serve the mp4 directly from my own server it does not pause, but when the video is loaded from Google Drive it gets paused after a few minutes. As I am not showing any controls, I need the video to play continuously.
A Codepen demonstrating the issue: https://codepen.io/5salman/pen/bGRMyKj
There's nothing inherent in Google Drive (that I can think of) that would cancel autoplay. More likely you're hitting an error event. The video will be loaded via "206 Partial Content" byte-range requests. If it doesn't end up being locally cached, that could make it more susceptible to network issues than a server that doesn't support byte-range requests. Also look at the Network tab in DevTools for any network errors.
Mitigation options:
Reduce the size of your video! It's 14 MB for 20 seconds of video. Use HandBrake or something similar to re-encode the video to a smaller bitrate/size. That will reduce network traffic and make it less likely to choke the viewing device.
Add an error event handler on the <video> tag. This can help you with diagnostics, and you can also trigger the video to (try to) play again:
function reloadVideo(vidElement, preserveTime = true) {
  if (preserveTime) {
    // wait to set the current time again until after the video is playable
    const position = vidElement.currentTime;
    const onCanPlay = () => {
      if (!isNaN(position) && position < vidElement.duration) {
        // seek just a frame or so ahead, in case there was a decode error on the video
        vidElement.currentTime = position + 1 / 30;
      }
      vidElement.removeEventListener('canplay', onCanPlay);
    };
    vidElement.addEventListener('canplay', onCanPlay);
  }
  const src = vidElement.currentSrc;
  vidElement.src = "";
  vidElement.src = src;
}
var vidElement = document.querySelector('#video');
// retry on recoverable errors
var retryErrorCodes = [MediaError.MEDIA_ERR_NETWORK, MediaError.MEDIA_ERR_DECODE];
var SECONDS_BEFORE_RETRY = 5;
vidElement.addEventListener('error', evt => {
  if (retryErrorCodes.includes(vidElement.error.code)) {
    setTimeout(reloadVideo, SECONDS_BEFORE_RETRY * 1000, vidElement);
  }
});
(Not recommended) If you need it to keep running, you can add a setInterval function that restarts the video if it's paused. Keep in mind that these "zombie" functions can get annoying if you don't manage them carefully.
// use setInterval to keep retrying the playback
var SECONDS_BETWEEN_PLAYING_CHECKS = 10;
var keepPlayingTimer = setInterval(function () {
  var vidElement = document.querySelector('#video');
  // make sure the element is still there
  if (vidElement && typeof vidElement.play === 'function') {
    // if it's not playing, start it playing again
    if (vidElement.paused) {
      vidElement.play();
    }
  } else {
    // don't call this function again if the video element is gone
    clearInterval(keepPlayingTimer);
  }
}, SECONDS_BETWEEN_PLAYING_CHECKS * 1000);

Ways to capture incoming WebRTC video streams (client side)

I am currently looking for the best way to store incoming WebRTC video streams. I am joining a video call using WebRTC (via Chrome) and I would like to record every incoming video stream from each participant in the browser.
The solutions I am researching are:
Intercept the network packets coming to the browser, e.g. using Wireshark, and then decode them, following this article: https://webrtchacks.com/video_replay/
Modify a browser to store the recording as a file, e.g. by modifying Chromium itself
Screen recorders or solutions like xvfb & ffmpeg are not options due to resource constraints. Is there any other way to capture the packets or the encoded video as a file? The solution must work on Linux.
If the media stream is what you want, one method is to override the browser's RTCPeerConnection. Here is an example:
In an extension manifest, add the following content script:
"content_scripts": [
  {
    "matches": ["http://*/*", "https://*/*"],
    "js": ["payload/inject.js"],
    "all_frames": true,
    "match_about_blank": true,
    "run_at": "document_start"
  }
]
inject.js
var inject = '(' + function() {
  // override the browser's default RTCPeerConnection
  var origPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
  // make sure it is supported
  if (origPeerConnection) {
    // our own RTCPeerConnection
    var newPeerConnection = function(config, constraints) {
      console.log('PeerConnection created with config', config);
      // proxy the original peer connection
      var pc = new origPeerConnection(config, constraints);
      // store the old addStream
      var oldAddStream = pc.addStream;
      // addStream is called when a local stream is added;
      // arguments[0] is a local media stream
      pc.addStream = function() {
        console.log("our addStream called!");
        // our MediaStream object
        console.dir(arguments[0]);
        return oldAddStream.apply(this, arguments);
      };
      // ontrack is called when a remote track is added;
      // the media stream(s) are located in event.streams
      pc.ontrack = function(event) {
        console.log("ontrack got a track");
        console.dir(event);
      };
      window.ourPC = pc;
      return pc;
    };
    ['RTCPeerConnection', 'webkitRTCPeerConnection', 'mozRTCPeerConnection'].forEach(function(obj) {
      // override the objects if they exist in the window object
      if (window.hasOwnProperty(obj)) {
        window[obj] = newPeerConnection;
        // copy the static methods
        Object.keys(origPeerConnection).forEach(function(x) {
          window[obj][x] = origPeerConnection[x];
        });
        window[obj].prototype = origPeerConnection.prototype;
      }
    });
  }
} + ')();';
var script = document.createElement('script');
script.textContent = inject;
(document.head || document.documentElement).appendChild(script);
script.parentNode.removeChild(script);
I tested this with a voice call in Google Hangouts and saw that two MediaStreams were added via pc.addStream and one track was added via pc.ontrack. addStream would seem to carry the local media streams, and the event object in ontrack is an RTCTrackEvent which has a streams property. I assume those are what you are looking for.
To access these streams from your extension's content script, you will need to create audio elements and set their srcObject property to the media stream, e.g.:
pc.ontrack = function(event) {
  // check if our element exists
  var elm = document.getElementById("remoteStream");
  if (elm == null) {
    // create an audio element
    elm = document.createElement("audio");
    elm.id = "remoteStream";
  }
  // set the srcObject to our stream (not sure if you need to clone it)
  elm.srcObject = event.streams[0].clone();
  // write the element to the body
  document.body.appendChild(elm);
  // fire a custom event so our content script knows the stream is available;
  // you could pass the id in the "detail" object, for example:
  //   new CustomEvent("remoteStreamAdded", {"detail": {"id": "audio_element_id"}})
  // then access it via e.detail.id in your event listener
  var e = new CustomEvent("remoteStreamAdded");
  window.dispatchEvent(e);
}
Then in your content script you can listen for that event and access the MediaStream like so:
window.addEventListener("remoteStreamAdded", function(e) {
  var elm = document.getElementById("remoteStream");
  var stream = elm.captureStream();
});
With the captured stream available to your content script, you can do pretty much anything you want with it. For example, MediaRecorder works really well for recording the stream(s), or you could use something like peer.js or maybe binary.js to stream it to another destination.
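For instance, here is a minimal MediaRecorder sketch (hedged: stream is the variable from the listener above, and the audio/webm mime type is an assumption):
var recordedChunks = [];
var recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' }); // mime type is an assumption
recorder.ondataavailable = function (e) {
  // collect each emitted chunk
  if (e.data.size > 0) recordedChunks.push(e.data);
};
recorder.onstop = function () {
  // assemble the chunks into a single Blob you can download or upload
  var blob = new Blob(recordedChunks, { type: 'audio/webm' });
  console.log('recording ready, size:', blob.size);
};
recorder.start(1000); // emit a chunk every second
// later: recorder.stop();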
I haven't tested this, but it should also be possible to override the local streams. For example, in inject.js you could create some blank MediaStream, override navigator.mediaDevices.getUserMedia, and return your own MediaStream instead of the local one.
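An untested sketch of that idea (myFakeStream is a placeholder for a stream you control, e.g. one obtained from canvas.captureStream()):
// keep a reference in case you want to fall back to the real devices
var origGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);
navigator.mediaDevices.getUserMedia = function (constraints) {
  console.log('getUserMedia intercepted with', constraints);
  // hand the page our own stream instead of the real devices;
  // myFakeStream is a placeholder you would create elsewhere
  return Promise.resolve(myFakeStream);
};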
This method should work in Firefox and maybe others as well, assuming you use an extension/app to load the inject.js script at the start of the document. Loading it before any of the target's libraries is key to making this work.
Capturing packets will only give you the network packets, which you would then need to turn into frames and put into a container. A server such as Janus can record the videos for you.
Running headless Chrome and using the JavaScript MediaRecorder API is another option, but it is much heavier on resources.

Empty microphone data from getUserMedia

Using the following code I get all zeroes in the audio stream from my microphone (using Chrome):
navigator.mediaDevices.getUserMedia({audio: true}).then(
  function(stream) {
    var audioContext = new AudioContext();
    var source = audioContext.createMediaStreamSource(stream);
    var node = audioContext.createScriptProcessor(8192, 1, 1);
    source.connect(node);
    node.connect(audioContext.destination);
    node.onaudioprocess = function (e) {
      console.log("Audio:", e.inputBuffer.getChannelData(0));
    };
  }).catch(function(error) { console.error(error); });
I created a jsfiddle here: https://jsfiddle.net/g3dck4dr/
What's wrong here?
Umm, something in your hardware config is wrong? The fiddle works fine for me (that is, it shows non-zero values). Do other web audio input tests work, like https://webaudiodemos.appspot.com/input/index.html?
Test to make sure you've selected the right input and that you don't have a hardware mute switch on.
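To confirm which inputs the browser actually sees, you can enumerate them (a small sketch):
navigator.mediaDevices.enumerateDevices().then(function (devices) {
  devices
    .filter(function (d) { return d.kind === 'audioinput'; })
    // labels may be empty until the page has been granted microphone permission
    .forEach(function (d) { console.log(d.label || 'unnamed input', d.deviceId); });
});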

getUserMedia() Screen share with Audio and update tray

When I use getUserMedia() for screen sharing, I don't get audio.
Things I would like to do, but couldn't find any relevant information on:
I want to capture both the screen and audio at the same time. How can I achieve this?
When my screen share starts, a tray appears (screenshot omitted). What is it called, and how can I modify it (e.g. its looks)?
If you want one stream made of your screen share for the video track and your webcam/mic audio for the audio track, you will need to make two calls to getUserMedia, with constraints set to screen and audio respectively. Then you will have to put the tracks into a common stream; a sketch follows after the list below. Eventually, you can attach that stream to a peer connection.
As peveuve said, you can also use two peer connections, but that comes with at least two problems:
You will not have synchronization between audio and video (not so important for screen sharing).
You will need two connections => twice the number of ports => more chances of failure. That is more likely to be a problem.
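Here is a hedged sketch of that single-stream approach using the newer getDisplayMedia API (the original answer predates it; older browsers needed a screen constraint on getUserMedia instead):
async function getScreenWithMic() {
  // one call for the screen, one for the microphone
  var screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  var mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  // put the screen video track and the mic audio track into one common stream
  return new MediaStream(screen.getVideoTracks().concat(mic.getAudioTracks()));
}
// the returned stream can then be attached to a single peer connection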
This is a mandatory security feature of the browser (to prevent a rogue page from broadcasting your screen without you knowing it). I do not know of a way to manipulate it at all.
It's possible with npm-msr on Chrome:
getScreenId(function (error, sourceId, screen_constraints) {
  navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
  navigator.getUserMedia(screen_constraints, function (stream) {
    navigator.getUserMedia({audio: true}, function (audioStream) {
      stream.addTrack(audioStream.getAudioTracks()[0]);
      var mediaRecorder = new MediaStreamRecorder(stream);
      mediaRecorder.mimeType = 'video/mp4';
      mediaRecorder.stream = stream;
      document.querySelector('video').src = URL.createObjectURL(stream);
      var video = document.getElementById('screen-video');
      if (video) {
        video.src = URL.createObjectURL(stream);
        video.width = 360;
        video.height = 300;
      }
    }, function (error) {
      alert(error);
    });
  }, function (error) {
    alert(error);
  });
});
Check this answer: Is it possible broadcast audio with screensharing with WebRTC
You can't share both screen and audio in the same peer; you have to open two peers.

Distorted sounds with Dartium and Web Audio API in Dart

I'm pretty new to the HTML5 audio API: I've read some of the related articles at HTML5 Rocks, but it can be a little tricky flipping between JavaScript and Dart at times.
In any case, I've been experimenting with HTML5 audio in Dart. To produce sound effects for a simple game, I created the class below: it creates an AudioContext, loads sound data into AudioBuffers, and when a sound needs to be played, creates an AudioBufferSourceNode through which to play the data stored in the buffers:
class Sfx {
  AudioContext audioContext;
  List<Map> soundList;
  int soundFiles;

  Sfx() {
    audioContext = new AudioContext();
    soundList = new List<Map>();
    var soundsToLoad = [
      {"name": "MISSILE", "url": "SFX/missile.wav"},
      {"name": "EXPLOSION", "url": "SFX/explosion.wav"}
    ];
    soundFiles = soundsToLoad.length;
    for (Map sound in soundsToLoad) {
      initSound(sound);
    }
  }

  bool allSoundsLoaded() => (soundFiles == 0);

  void initSound(Map soundMap) {
    HttpRequest req = new HttpRequest();
    req.open('GET', soundMap["url"], true);
    req.responseType = 'arraybuffer';
    req.on.load.add((Event e) {
      audioContext.decodeAudioData(
        req.response,
        (var buffer) {
          // successful decode
          print("...${soundMap["name"]} loaded...");
          soundList.add({"name": soundMap["name"], "buffer": buffer});
          soundFiles--;
        },
        (var error) {
          print("error loading ${soundMap["name"]}");
        }
      );
    });
    req.send();
  }

  void sfx(AudioBuffer buffer) {
    AudioBufferSourceNode source = audioContext.createBufferSource();
    source.connect(audioContext.destination, 0, 0);
    source.buffer = buffer;
    source.start(0);
  }

  void playSound(String sound) {
    for (Map m in soundList) {
      print(m);
      if (m["name"] == sound) {
        sfx(m["buffer"]);
        break;
      }
    }
  }
}
(The sound effects are in a folder "SFX". Now that I look at the code, there are probably a million better ways to organise the data, but that's beside the point right now.) I am able to play sound effects by creating an instance of Sfx and calling the method playSound,
e.g.
#import('dart:html');
#source('sfx.dart');

Sfx sfx;

void main() {
  sfx = new Sfx();
  window.on.keyUp.add((KeyboardEvent keX) {
    sfx.playSound("MISSILE");
  });
}
(Edit: added code to play sound when a key is hit.)
The problem is: although the sound effects play as expected in Safari (with the dart2js JavaScript), they are distorted when played in Dartium or in Chrome (with the dart2js JavaScript). (In Firefox, there are even worse problems!)
Is there anything obvious that I have neglected to do or that I need to take into account? Otherwise, are there any references or tutorials, preferably in a Dart context, that might help?
Thanks for trying Dart!
First off, Firefox doesn't support the Web Audio API (yet?); Chrome and Safari do. You can track adoption of the Web Audio API here: http://caniuse.com/#feat=audio-api
Second, please try this Web Audio API sample in Dartium (you will need to clone the repo first and run it locally): https://github.com/dart-lang/dart-html5-samples/tree/master/web/webaudio/intro
This sample works for me locally.
This sounds more like a bug report. If the sample from dart-html5-samples works for you, but your above code continues to be distorted, please open a bug at http://dartbug.com/new so we can take a look.
One thing to consider is waiting until the specific MISSILE sound is loaded before hooking up the keyUp handler.
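A minimal sketch of that idea (untested; onReady is a hypothetical callback field you would add to Sfx and invoke inside the decodeAudioData success handler once soundFiles reaches zero):
void main() {
  sfx = new Sfx();
  // onReady is hypothetical: fired by Sfx once every buffer has decoded
  sfx.onReady = () {
    window.on.keyUp.add((KeyboardEvent keX) {
      sfx.playSound("MISSILE");
    });
  };
}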