Vimeo force CC language

I'm trying to embed a Vimeo video into my website, and I have added about five different caption (CC) languages to the video on Vimeo. However, I don't want the user to have to change their language in the CC drop-down in the Vimeo embed. I would like to assign it in HTML/JavaScript (using geolocation to select their base language); they can then change their CC language once the video has started playing.

You can use the enableTextTrack function on a player initialized by the JS API provided by Vimeo:
// Select the iframe with the DOM API
var iframe = document.querySelector('iframe');
var player = new Vimeo.Player(iframe);

player.enableTextTrack('en').then(function(track) {
    // track.language = the ISO code for the language
    // track.kind = 'captions' or 'subtitles'
    // track.label = the human-readable label
}).catch(function(error) {
    switch (error.name) {
        case 'InvalidTrackLanguageError':
            // no track was available with the specified language
            break;

        case 'InvalidTrackError':
            // no track was available with the specified language and kind
            break;

        default:
            // some other error occurred
            break;
    }
});
More information is in the GitHub repo for the Vimeo player JS API: https://github.com/vimeo/player.js#enabletexttracklanguage-string-kind-string-promiseobject-invalidtracklanguageerrorinvalidtrackerrorerror
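To match the question's goal of selecting a default language automatically, here is a minimal sketch that derives the base language from the browser locale rather than geolocation; the fallback to 'en' is an assumption:

// Sketch: pick the caption language from the browser locale.
// Assumes the page already embeds the video in an iframe and loads
// the Vimeo Player API; falling back to 'en' is an assumption.
var iframe = document.querySelector('iframe');
var player = new Vimeo.Player(iframe);
var baseLang = (navigator.language || 'en').split('-')[0]; // "en-US" -> "en"

player.enableTextTrack(baseLang).catch(function() {
    // No track for the detected language; fall back to English
    return player.enableTextTrack('en');
});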

We don't have this yet, but we do plan on offering some way to do it with an embed parameter and through the JavaScript API in the future.

Related

Ways to capture incoming WebRTC video streams (client side)

I am currently looking for the best way to store incoming WebRTC video streams. I am joining the video call using WebRTC (via Chrome) and I would like to record every incoming video stream from each participant in the browser.
The solutions I am researching are:
Intercepting the network packets coming to the browser, e.g. using Wireshark, and then decoding them, following this article: https://webrtchacks.com/video_replay/
Modifying a browser to store the recording as a file, e.g. by modifying Chromium itself
Screen recorders or solutions like xvfb & ffmpeg are not options due to resource constraints. Is there any other way to capture the packets or the encoded video as a file? The solution must work on Linux.
If the media stream is what you want, one method is to override the browser's RTCPeerConnection. Here is an example:
In an extension manifest add the following content script:
content_scripts": [
{
"matches": ["http://*/*", "https://*/*"],
"js": ["payload/inject.js"],
"all_frames": true,
"match_about_blank": true,
"run_at": "document_start"
}
]
inject.js
var inject = '('+function() {
    // Override the browser's default RTCPeerConnection.
    var origPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;

    // Make sure it is supported
    if (origPeerConnection) {
        // Our own RTCPeerConnection
        var newPeerConnection = function(config, constraints) {
            console.log('PeerConnection created with config', config);

            // Proxy the original peer connection
            var pc = new origPeerConnection(config, constraints);

            // Store the old addStream
            var oldAddStream = pc.addStream;

            // addStream is called when a local stream is added.
            // arguments[0] is a local media stream
            pc.addStream = function() {
                console.log("our addStream called!");
                // Our MediaStream object
                console.dir(arguments[0]);
                return oldAddStream.apply(this, arguments);
            }

            // ontrack is called when a remote track is added.
            // The media stream(s) are located in event.streams
            pc.ontrack = function(event) {
                console.log("ontrack got a track");
                console.dir(event);
            }

            window.ourPC = pc;

            return pc;
        };

        ['RTCPeerConnection', 'webkitRTCPeerConnection', 'mozRTCPeerConnection'].forEach(function(obj) {
            // Override the objects if they exist on the window object
            if (window.hasOwnProperty(obj)) {
                window[obj] = newPeerConnection;

                // Copy the static methods
                Object.keys(origPeerConnection).forEach(function(x) {
                    window[obj][x] = origPeerConnection[x];
                });

                window[obj].prototype = origPeerConnection.prototype;
            }
        });
    }
}+')();';

var script = document.createElement('script');
script.textContent = inject;
(document.head || document.documentElement).appendChild(script);
script.parentNode.removeChild(script);
I tested this with a voice call in Google Hangouts and saw that two MediaStreams were added via pc.addStream and one track was added via pc.ontrack. addStream appears to receive the local media streams, and the event object in ontrack is an RTCTrackEvent, which has a streams property; I assume these are what you are looking for.
To access these streams from your extension's content script, you will need to create audio elements and set their "srcObject" property to the media stream, e.g.:
pc.ontrack = function(event) {
    // Check if our element exists
    var elm = document.getElementById("remoteStream");
    if (elm == null) {
        // Create an audio element
        elm = document.createElement("audio");
        elm.id = "remoteStream";
    }

    // Set the srcObject to our stream. Not sure if you need to clone it
    elm.srcObject = event.streams[0].clone();

    // Write the element to the body
    document.body.appendChild(elm);

    // Fire a custom event so our content script knows the stream is available.
    // You could pass the id in the "detail" object, for example:
    // new CustomEvent("remoteStreamAdded", {"detail": {"id": "audio_element_id"}})
    // then access it via e.detail.id in your event listener.
    var e = new CustomEvent("remoteStreamAdded");
    window.dispatchEvent(e);
}
Then in your content script you can listen for that event and access the MediaStream like so:
window.addEventListener("remoteStreamAdded", function(e) {
    // The audio element created by the injected script above
    var elm = document.getElementById("remoteStream");
    var stream = elm.captureStream();
});
With the captured stream available to your content script you can do pretty much anything you want with it. For example, MediaRecorder works really well for recording the stream(s), or you could use something like peer.js or maybe binary.js to stream it to another source.
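For instance, a minimal MediaRecorder sketch; the mime type and one-second timeslice are assumptions, and `stream` is the MediaStream from the listener above:

// Sketch: record the captured stream with MediaRecorder.
// Assumes `stream` came from elm.captureStream() above; the mime type
// and the 1s timeslice are arbitrary choices, not requirements.
var chunks = [];
var recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });

recorder.ondataavailable = function(e) {
    if (e.data.size > 0) chunks.push(e.data);
};

recorder.onstop = function() {
    // Assemble the chunks into one Blob and expose a URL for it
    var blob = new Blob(chunks, { type: 'audio/webm' });
    console.log('Recording available at', URL.createObjectURL(blob));
};

recorder.start(1000); // emit a chunk every second
// ...later, when the call ends: recorder.stop();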
I haven't tested this, but it should also be possible to override the local streams. For example, in inject.js you could establish some blank MediaStream, override navigator.mediaDevices.getUserMedia, and instead of returning the local MediaStream return your own.
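An untested sketch of that idea, observing the local stream rather than replacing it (substituting your own MediaStream would happen where the comment indicates):

// Untested sketch: wrap getUserMedia so the local stream can be
// observed (or substituted) before the page ever sees it.
var origGUM = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);

navigator.mediaDevices.getUserMedia = function(constraints) {
    return origGUM(constraints).then(function(stream) {
        console.log('local stream intercepted', stream.id);
        window.ourLocalStream = stream; // stash it for later use
        return stream; // or return your own MediaStream here instead
    });
};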
This method should work in Firefox and maybe other browsers as well, assuming you use an extension/app to load the inject.js script at the start of the document. Loading it before any of the target's libraries is key to making this work.
Capturing packets will only give you the network packets, which you would then need to turn into frames and put into a container. A server such as Janus can record videos.
Running headless Chrome and using the JavaScript MediaRecorder API is another option, but it is much heavier on resources.

Chromecast subtitles on default receiver applications

I am trying to include subtitles on a Chromecast application I'm building.
I am using the default receiver application.
I am writing a Chrome sender application using v1 of the Chrome sender API.
According to the Chromecast sender API documentation, I should be passing an array of Track objects into the chrome.cast.media.MediaInfo object. My issue is that whenever I call chrome.cast.media.Track(trackId, trackType), it returns undefined. When I look through the public methods of chrome.cast.media in the console, I don't see anything related to Track. Link to documentation here.
Below is my loadMedia method, where I try to include an array of Track objects along with my LoadRequest, as specified by the Cast API. The commented-out code is how I've seen closed captioning handled in one of the Cast GitHub repositories, but unfortunately I believe you have to handle that customData in your own custom receiver application.
Are subtitles through the Chrome sender SDK possible yet, or does one have to build their own receiver application and specifically handle text tracks through passed-in customData? Am I potentially using the wrong sender API?
function loadMedia() {
    mediaUrl = decodeURIComponent(_player.sources.mp4);
    var mediaInfo = new chrome.cast.media.MediaInfo(mediaUrl);
    mediaInfo.contentType = 'video/mp4';

    var track1 = new chrome.cast.media.Track(1, chrome.cast.media.TrackType.TEXT);
    track1.trackContentId = "https://dl.dropboxusercontent.com/u/35106650/test.vtt";
    mediaInfo.tracks = [track1];

    var request = new chrome.cast.media.LoadRequest(mediaInfo);

    // var json = {
    //     cc: {
    //         tracks: [{
    //             src: "https://dl.dropboxusercontent.com/u/35106650/test.vtt"
    //         }],
    //         active: 0
    //     }
    // };
    // request.customData = json;

    session.loadMedia(request, onMediaDiscovered.bind(this, 'loadMedia'), onMediaError);
}
Currently, neither the Default nor the Styled receiver supports closed captions; you need to create your own. We have a sample in our GitHub repo that can be used for doing exactly that.
Update: the Styled and Default receivers now support tracks; see our documentation.
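For reference, a sketch of what the load step looks like with track support, based on the Cast sender API documentation; the name and language values are illustrative:

// Sketch based on the Cast sender API: a TEXT track needs a content
// type, subtype, name and language, and can be activated at load time.
var track1 = new chrome.cast.media.Track(1, chrome.cast.media.TrackType.TEXT);
track1.trackContentId = "https://dl.dropboxusercontent.com/u/35106650/test.vtt";
track1.trackContentType = 'text/vtt';
track1.subtype = chrome.cast.media.TextTrackType.SUBTITLES;
track1.name = 'English Subtitles'; // illustrative
track1.language = 'en-US';         // illustrative

mediaInfo.tracks = [track1];
var request = new chrome.cast.media.LoadRequest(mediaInfo);
request.activeTrackIds = [1]; // enable the subtitle track from the start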

Looking for a library that can detect support for specific audio/video file formats

I'm building a mobile web app, and I am implementing video & audio tags. Apparently not all devices can play all file formats. Modernizr can return the codec to me, but how can I know whether my file has that specific codec?
I can identify the file extension or MIME type, but then I'd need to compare the codec with the file extension and maintain an array of this data, which seems like a Sisyphean task.
What do you say? Does anyone know of a good library that provides this information? Or maybe my approach is wrong and there's a better way to achieve this.
Please note that Firefox OS, for example, displays a message to the user that a specific file format isn't supported by the OS, but it doesn't show this message when dealing with audio files. This means I need to provide this feedback to the user myself, and I'd also prefer to do it in a way that visually matches my application.
An example:
I have video.mp4. Modernizr returns a codec of H.264. These two pieces of information don't relate to each other.
I cannot place fallbacks for my video to cover all available video formats. The app is a browser for a cloud file hosting service (like Dropbox), and if a user uploaded a file which cannot play on Firefox OS, then he must understand why his file doesn't play; for that I need this library, instead of managing this comparison myself.
Some references:
Mozilla - Supported media formats
Dive Into HTML5 - detect video formats
Thank you.
If you have access to the MIME type, you could simply use the audio or video element's canPlayType() method:
var canPlay = (function(){
    var a = document.createElement('audio'),
        v = document.createElement('video');

    function canPlayAudio(type){
        var mime = 'audio/';
        if (a && a.canPlayType) {
            if (type.indexOf(mime) === -1) type = mime + type;
            // Also try the "x-" prefixed variant of the subtype
            return !!a.canPlayType(type) || !!a.canPlayType(type.replace(/^audio\/(x-)?(.+)$/, mime + 'x-$2'));
        }
        return false;
    }

    function canPlayVideo(type){
        var mime = 'video/';
        if (v && v.canPlayType) {
            if (type.indexOf(mime) === -1) type = mime + type;
            // Also try the "x-" prefixed variant of the subtype
            return !!v.canPlayType(type) || !!v.canPlayType(type.replace(/^video\/(x-)?(.+)$/, mime + 'x-$2'));
        }
        return false;
    }

    return {
        audio: canPlayAudio,
        video: canPlayVideo
    };
})();
Then you can perform the tests like this (note: including the "audio/" or "video/" part is optional):
canPlay.audio('audio/mpeg') // true
canPlay.audio('m4a') // true
canPlay.audio('wav') // true
canPlay.audio('flac') // false
canPlay.audio('audio/ogg') // true
canPlay.video('video/mpeg') // false
canPlay.video('video/mp4') // true
canPlay.video('m4v') // true
canPlay.video('video/webm') // true
canPlay.video('avi') // false
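Tying this back to the question, here is a hedged sketch of the custom feedback flow for a cloud-file browser; the element id, message text, and file object shape are invented for illustration:

// Sketch: before rendering a preview, check the file's MIME type and
// show a custom message instead of a broken player. The id, message
// and file shape below are illustrative, not from the question.
function showPreview(file) { // file = { name: 'video.mp4', mime: 'video/mp4' }
    var supported = file.mime.indexOf('audio/') === 0
        ? canPlay.audio(file.mime)
        : canPlay.video(file.mime);

    if (!supported) {
        document.getElementById('preview').textContent =
            file.name + ' cannot be played on this device.';
        return;
    }
    // ...otherwise create the <audio>/<video> element as usual
}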

How to browse mobile directory in flex?

I have captured 3 videos on my mobile, which are by default stored in the phone's gallery (Gallery/videos/). I have to play these 3 videos in one of my Flex mobile applications. How can I get the videos into the Flex project? If I need to browse the mobile directory, kindly help me with some code to do so.
I too am looking for an answer to this question. Right now, based on other Stack Overflow discussions, exhaustive perusal of tutorials and Adobe documentation, and comments on both (often the more useful resource), I'm coming to the conclusion that it's not possible.
you can use CameraRoll.browseForImage() to open the iOS gallery of photos and see all entities of MediaType.IMAGE, but it will not show you MediaType.VIDEO
you can use CameraUI to launch the system camera by delegation, and that returns a MediaPromise, but as far as I can tell it does not save the video you capture anywhere, and I cannot find a way to access the captured video using the MediaPromise (at least using the Loader class)
Here's my code as a hint in that direction. The second code block uses the CameraRoll to browseForImage(), but there is no browseForVideo() in the API.
if (CameraUI.isSupported)
{
    camera = new CameraUI();
    camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
    camera.addEventListener(Event.CANCEL, cameraCanceled);
    camera.addEventListener(ErrorEvent.ERROR, cameraError);
    camera.launch(MediaType.VIDEO);
}
else
{
    statusText.text = "Camera not supported on this device.";
    startTimer();
}

if (CameraRoll.supportsBrowseForImage)
{
    roll = new CameraRoll();
    roll.addEventListener(MediaEvent.SELECT, cameraRollEventComplete);
    roll.addEventListener(Event.CANCEL, cameraCanceled);
    roll.addEventListener(ErrorEvent.ERROR, cameraError);
    roll.browseForImage();
}
else
{
    statusText.text = "Camera roll not supported on this device.";
    startTimer();
}
I've since found that videos captured using the delegated system camera are stored in a temporary storage location that iOS DOES allow access to. (I was pleasantly shocked.)
The captured video is not added to the device's Camera Roll like other videos captured using the iOS system Camera app, so it's not enough to capture video and expect to be able to access it later (if, for instance, CameraRoll.browseForVideo() is ever added to the API).
Therefore, you have to 'get while the getting is good' and move the file from the temporary storage location to some non-volatile location such as ApplicationStorageDirectory or the user's Documents directory (the only options in iOS, I think).
The MediaPromise... I think... is completely useless for accessing the video via any direct progressive loader/streamer method, but it still provides the location/url/path/filename of the temporary file, so you can perform File operations on it.
Ironically, there are tutorials for getting around the lack of a file location/url/path/filename in the MediaPromise when using CameraRoll.browseForImage(): the trick is to use a loader class to load the image content (which you can then write out to a file). Yet when taking video, the video content is not accessible, and instead a file location/url/path/filename is provided. Also ironic that I could find nearly no resources to help with this. grumble
This section is in a Spark SkinnablePopUpContainer, and I use the same click event for several buttons, so the 'case' below sits in the switch-case of that event handler function.
In case you are not familiar, close(true, data) is the method that closes the SkinnablePopUpContainer, tells the parent/owner that the container was closed purposefully, and that it should look for the data object being shared back (i.e., there are changes to be committed).
case "cameraVideo":
{
if(CameraUI.isSupported)
{
camera = new CameraUI();
camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
camera.addEventListener(Event.CANCEL, cameraCanceled);
camera.addEventListener(ErrorEvent.ERROR, cameraError);
camera.launch(MediaType.VIDEO);
}
else
{
statusText.text = "Camera not supported on this device.";
startTimer();
}
break;
}
protected function cameraCanceled(event:Event):void
{
    statusText.text = "Camera access canceled by user.";
    startTimer();
}

protected function cameraError(event:ErrorEvent):void
{
    statusText.text = "There was an error while trying to use the camera.";
    startTimer();
}

protected function videoMediaEventComplete(event:MediaEvent):void
{
    statusText.text = "Preparing captured video...";

    camera.removeEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
    camera.removeEventListener(Event.CANCEL, cameraCanceled);
    camera.removeEventListener(ErrorEvent.ERROR, cameraError);

    var media:MediaPromise = event.data;
    data.MediaType = MediaType.VIDEO;
    data.MediaPromise = media;
    data.source = "camera video";
    close(true, data);
}
This section is the ActionScript in the close handler of the parent/owner of the SkinnablePopUpContainer (truncated once the useful code is included):
private function choosePictureLightboxClosed(event:PopUpEvent):void
{
    imageButtonsActive = false;
    if (event.commit)
    {
        this.data = event.data as Object;
        filters = new Array();
        selection = true;

        switch (data.MediaType)
        {
            case MediaType.VIDEO:
            {
                mediaType = "video";
                trace(data.MediaPromise.file.url + " - " + data.MediaPromise.relativePath + " - " + data.MediaPromise.mediaType);

                var sourceFile:File = new File(data.MediaPromise.file.url);
                var destinationFile:File = File.applicationStorageDirectory.resolvePath("User" + parentApplication.userid);

                if (destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();

                destinationFile = destinationFile.resolvePath("Videos");
                if (destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();

                destinationFile = destinationFile.resolvePath(parentApplication.userid + "Video" + new Date().getTime() + ".mov");
                trace(destinationFile.nativePath);

                sourceFile.moveTo(destinationFile, true);
                break;
            }
I sure do hope this helps. This has been a very frustrating (and costly, in terms of our project being government-grant funded and having deadlines we utterly failed to meet) experience, and I very much hope that these hard-won solutions might help others avoid the same.

Web Audio API: How to load another audio file?

I want to write a basic script with the HTML5 Web Audio API that can play some audio files. But I don't know how to unload a playing audio file and load another one. In my script, two audio files play at the same time, which is not what I want.
Here is my code:
var context,
    soundSource,
    soundBuffer;

// Step 1 - Initialise the Audio Context
context = new webkitAudioContext();

// Step 2: Load our sound using XHR
function playSound(url) {
    // Note: this loads asynchronously
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";

    // Our asynchronous callback
    request.onload = function() {
        var audioData = request.response;
        audioGraph(audioData);
    };

    request.send();
}

// This is the code we are interested in
function audioGraph(audioData) {
    // Create a sound source
    soundSource = context.createBufferSource();

    // The audio context handles creating source buffers from raw binary
    soundBuffer = context.createBuffer(audioData, true/* make mono */);

    // Add the buffered data to our object
    soundSource.buffer = soundBuffer;

    // Plug the cable from one thing to the other
    soundSource.connect(context.destination);

    // Finally
    soundSource.noteOn(context.currentTime);
}

// Stop all sounds
function stopSounds() {
    // How can I do this?
}

// Events for audio buttons
document.querySelector('.pre').addEventListener('click',
    function() {
        stopSounds();
        playSound('http://thelab.thingsinjars.com/web-audio-tutorial/hello.mp3');
    }
);

document.querySelector('.next').addEventListener('click',
    function() {
        stopSounds();
        playSound('http://thelab.thingsinjars.com/web-audio-tutorial/nokia.mp3');
    }
);
You should be pre-loading sounds into buffers once, at launch, and simply creating a fresh AudioBufferSourceNode whenever you want to play one back (source nodes are one-shot and cannot be reused).
To play multiple sounds in sequence, you need to schedule them using noteOn(time), one after the other, based on the respective buffer lengths.
To stop sounds, use noteOff.
It sounds like you are missing some fundamental Web Audio concepts. These (and more) are described in detail and shown with samples in this HTML5Rocks tutorial and the FAQ.
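A minimal sketch of that pattern, keeping the question's prefixed API (a modern rewrite would use AudioContext, decodeAudioData, start() and stop()); the buffer cache and function names are illustrative:

// Sketch: preload each buffer once, keep one active source node,
// and stop it before starting the next. Names here are illustrative.
var context = new webkitAudioContext();
var buffers = {};           // url -> decoded buffer, filled at launch
var currentSource = null;   // the node that is currently playing

function preload(url) {
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";
    request.onload = function() {
        buffers[url] = context.createBuffer(request.response, true);
    };
    request.send();
}

function stopSounds() {
    if (currentSource) {
        currentSource.noteOff(context.currentTime); // stop() in modern code
        currentSource = null;
    }
}

function playSound(url) {
    stopSounds(); // source nodes are one-shot, so make a fresh one
    currentSource = context.createBufferSource();
    currentSource.buffer = buffers[url];
    currentSource.connect(context.destination);
    currentSource.noteOn(context.currentTime); // start() in modern code
}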