Chromecast subtitles on default receiver applications

I am trying to include subtitles in a Chromecast application I'm building. I am using the default receiver application, and I am writing a Chrome sender application using v1 of the Chrome sender API.
According to the Chromecast sender API documentation, I should be passing an array of Track objects into the chrome.cast.media.MediaInfo object. My issue is that whenever I call chrome.cast.media.Track(trackId, trackType), it returns undefined. When I look through the public methods in chrome.cast.media in the console, I don't see anything related to Track.
Below is my loadMedia method, where I try to include an array of Track objects along with my LoadRequest, as specified by the Cast API. The commented-out code is how I've seen closed captioning handled in one of the Cast GitHub repositories, but unfortunately I believe you have to handle that customData in your own custom receiver application.
Are subtitles through the Chrome sender SDK possible yet, or does one have to build their own receiver application and specifically handle text tracks through passed-in customData? Am I potentially using the wrong sender API?
function loadMedia() {
    mediaUrl = decodeURIComponent(_player.sources.mp4);
    var mediaInfo = new chrome.cast.media.MediaInfo(mediaUrl);
    mediaInfo.contentType = 'video/mp4';

    var track1 = new chrome.cast.media.Track(1, chrome.cast.media.TrackType.TEXT);
    track1.trackContentId = "https://dl.dropboxusercontent.com/u/35106650/test.vtt";
    mediaInfo.tracks = [track1];

    var request = new chrome.cast.media.LoadRequest(mediaInfo);

    // var json = {
    //     cc: {
    //         tracks: [{
    //             src: "https://dl.dropboxusercontent.com/u/35106650/test.vtt"
    //         }],
    //         active: 0
    //     }
    // };
    // request.customData = json;

    session.loadMedia(request, onMediaDiscovered.bind(this, 'loadMedia'), onMediaError);
}

Currently, neither the Default nor the Styled Receiver supports closed captions; you need to create your own receiver. We have a sample in our GitHub repo that can be used for doing exactly that.
Update: the Styled and Default receivers now support Tracks; see our documentation.
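To illustrate, here is a minimal sender-side sketch of side-loading a WebVTT track once Tracks are supported. It assumes the v2 sender library (chrome.cast.media.Track is not present in v1, which would explain the undefined in the question) and the Track, TextTrackType, and LoadRequest.activeTrackIds members; treat it as a sketch, not the canonical sample. The subtitle URL is illustrative:

var mediaInfo = new chrome.cast.media.MediaInfo(mediaUrl);
mediaInfo.contentType = 'video/mp4';

// describe the subtitle track (URL is a placeholder)
var subtitles = new chrome.cast.media.Track(1, chrome.cast.media.TrackType.TEXT);
subtitles.trackContentId = 'https://example.com/captions.vtt';
subtitles.trackContentType = 'text/vtt';
subtitles.subtype = chrome.cast.media.TextTrackType.SUBTITLES;
subtitles.name = 'English';
subtitles.language = 'en-US';
mediaInfo.tracks = [subtitles];

var request = new chrome.cast.media.LoadRequest(mediaInfo);
request.activeTrackIds = [1]; // enable the text track at load time

session.loadMedia(request, onMediaDiscovered.bind(this, 'loadMedia'), onMediaError);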

Related

Can't import geojson value as string in google maps with firebase web

So, I set up Firebase to communicate with my web app, which uses the Google Maps API, and my goal is this: when a user draws a shape on the map (polygon, linestring), I want to send its geoJson value to Firebase (currently I send it as a string), and then retrieve it back so it appears on the map for everyone (since it gets synced from the Firebase database). My problem is that when I try to retrieve the geoJson data and add it to Google Maps, the line map.data.addGeoJson(geoJsonString); (where geoJsonString is the geoJson value stored in Firebase) throws an error saying:
Uncaught Jb {message: "not a Feature or FeatureCollection", name: "InvalidValueError", stack: "Error↵ at new Jb (https://maps.googleapis.com/m…tatic.com/firebasejs/4.13.0/firebase.js:1:278304)"}
For some reason the Google Maps API doesn't accept the geoJson value, even though console.log(geoJsonString); prints a valid geoJson value (checked at http://geojsonlint.com/).
Now the strange part: if I import the same geoJson value manually (storing it in a var and then calling map.data.addGeoJson(geoJsonString);), it works just fine.
This function syncs Firebase with the web app:
function gotData(data){
    paths = data.val();
    if(paths == null){
        console.log("firebase null");
        alert('Database is empty! Try adding some paths.');
    }
    else{
        var keys = Object.keys(paths);
        for(var i = 0; i < keys.length; i++){
            var k = keys[i];
            var geoJsonString = paths[k].geoJsonString;
            console.log(geoJsonString);
            map.data.addGeoJson(geoJsonString);
        }
    }
}
This function updates and pushes data to Firebase:
function updateData(){
    data = {
        geoJsonString: geoJsonOutput.value
    }
    ref = database.ref('firebasePaths');
    ref.push(data);
}
In this function (which is used to store geoJson values locally in a file), I call updateData after a new path is drawn on the map:
// Refresh different components from other components.
function refreshGeoJsonFromData() {
    map.data.toGeoJson(function(geoJson) {
        geoJsonOutput.value = JSON.stringify(geoJson);
        updateData();
        refreshDownloadLinkFromGeoJson();
    });
}
Example of my Firebase containing 2 random geoJson entries:
I can't trace where the problem is. Any ideas?
Update: I managed to fix this issue by parsing the string with JSON.parse("retrieved string from firebase"), saving it to a variable, and then adding it to the map with map.data.addGeoJson(parsedVariable).
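In other words, map.data.addGeoJson() expects a JavaScript object (a Feature or FeatureCollection), not a JSON string, so the value read back from Firebase has to be parsed first. A minimal sketch of that fix inside the sync callback above:

function gotData(data){
    var paths = data.val();
    if(paths == null) return;
    Object.keys(paths).forEach(function(k){
        // Firebase stored the geoJson as a string; parse it back into an object
        var parsed = JSON.parse(paths[k].geoJsonString);
        map.data.addGeoJson(parsed);
    });
}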
We have not faced that issue ourselves yet, but we are aware of it.
Our intended solution is to use GeoFire: an open-source library for the Firebase Realtime Database that adds support for geospatial querying.
You can find the library description here:
https://firebase.google.com/docs/libraries/
For the supported web library:
https://github.com/firebase/geofire-js
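For context, a small sketch of what storing and querying locations with geofire-js looks like, based on its README (the ref path and coordinates are made up):

var firebaseRef = firebase.database().ref("geofire"); // illustrative path
var geoFire = new GeoFire(firebaseRef);

// store a location ([latitude, longitude]) under a key
geoFire.set("some_key", [37.79, -122.41]).then(function() {
    // query for keys within 10 km of a center point
    var geoQuery = geoFire.query({ center: [37.79, -122.41], radius: 10 });
    geoQuery.on("key_entered", function(key, location, distance) {
        console.log(key + " is " + distance + " km from the center");
    });
});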

Dart / flutter: download or read the contents of a Google Drive file

I have a public (anyone with the link can view) file on my Google Drive and I want to use the content of it in my Android app.
From what I could gather so far, I need the fileID, the OAuth token, and the client ID, which I already have. But I can't figure out the exact procedure for authorising the app or fetching the file.
EDIT:
Simply reading it using file.readAsLines didn't work:
final file = new File(dogListTxt);
Future<List<String>> dogLinks = file.readAsLines();
return dogLinks;
The dogLinks variable isn't filled with any data, but I get no error messages.
The other method I tried was following this example, but that is a web-based application with an explicit authorization request (and for some reason I was never able to import the dart:html library).
The best solution would be one that works seamlessly: I would store the content in a List at application launch and re-read it when a manual refresh button is pressed.
I found several old solutions here, but the methods described in them don't seem to work anymore (they are from 4-5 years ago).
Is there a good step-by-step tutorial about integrating the Drive API in a flutter application written in dart?
I had quite a bit of trouble with this; it seems much harder than it should be. Also, this is for TXT files only; you need to use files.export() for other file types.
First you need to get a list of files:
ga.FileList textFileList = await drive.files.list(q: "'root' in parents");
Then you need to get those files based on ID (this is for TXT files):
ga.Media response = await drive.files.get(fileId, downloadOptions: ga.DownloadOptions.FullMedia);
Next is the messy part: you need to convert your Media object's stream into a File and then read the text from it. (#Google, please make this easier.)
// Requires dart:io and getTemporaryDirectory() from package:path_provider.
List<int> dataStore = [];
response.stream.listen((data) {
  print("DataReceived: ${data.length}");
  dataStore.insertAll(dataStore.length, data);
}, onDone: () async {
  Directory tempDir = await getTemporaryDirectory(); // Get temp folder using path_provider
  String tempPath = tempDir.path; // Get the path to that location
  File file = File('$tempPath/test'); // Create a dummy file
  await file.writeAsBytes(dataStore); // Write the bytes collected from the Media stream
  String content = file.readAsStringSync(); // Read the string back from the file
  print(content); // Finally you have your text
  print("Task Done");
}, onError: (error) {
  print("Some Error");
});
There currently is no good step-by-step tutorial, but using https://developers.google.com/drive/api/v3/manage-downloads as a reference for which methods to use in Dart/Flutter via https://pub.dev/packages/googleapis: to download or read the contents of a Google Drive file, you should be using googleapis/Drive v3, specifically the methods from the FilesResourceApi class.
drive.files.export(), if this is a Google document
/// Exports a Google Doc to the requested MIME type and returns the exported content. Please note that the exported content is limited to 10MB.
drive.files.get(), if this is something else, a non-Gdoc file
/// Gets a file's metadata or content by ID.
Simplified example:
var drive = new DriveApi(http_client);
drive.files.get(fileId).then((file) {
  // returns file
});
However, what I discovered was that this Dart-GoogleAPIs library seemed to be missing a method equivalent to executeMediaAndDownloadTo(outputStream). In the original Google Drive API v3, this method adds the alt=media URL parameter to the underlying HTTP request. Otherwise, you'll get the error, which is what I saw:
403, message: Export requires alt=media to download the exported
content.
And I wasn't able to find another way to insert that URL parameter into the current request (maybe someone else knows?). So as an alternative, you'll have to resort to implementing your own Dart API to do the same thing, as hinted by what this OP did over here https://github.com/dart-lang/googleapis/issues/78: CustomDriveApi
So you'll either:
do it through Dart with your own HttpClient implementation and try to closely follow the REST flow from Dart-GoogleAPIs, but remembering to include the alt=media
or implement and integrate your own native-Android/iOS code and use the original SDK's convenient executeMediaAndDownloadTo(outputStream)
(note, I didn't test googleapis/Drive v2, but a quick examination of the same methods looks like they are missing the same thing)
I wrote this function to get file content of a file using its file id. This is the simplest method I found to do it.
Future<String> _getFileContent(String fileId) async {
  var response = await driveApi.files.get(fileId, downloadOptions: DownloadOptions.fullMedia);
  if (response is! Media) throw Exception("invalid response");
  return await utf8.decodeStream(response.stream);
}
Example usage:
// save a file to the app data folder with 150 "hello world"s
var content = utf8.encode("hello world" * 150);
driveApi.files
    .create(File(name: fileName, parents: [appDataFolder]),
        uploadMedia: Media(Stream.value(content), content.length))
    .then((value) {
  Log().i("finished uploading file ${value.id}");
  var id = value.id;
  if (id != null) {
    // after a successful upload, read the recently uploaded file content
    _getFileContent(id).then((value) => Log().i("got content is $value"));
  }
});

Ways to capture incoming WebRTC video streams (client side)

I am currently trying to find the best way to store incoming WebRTC video streams. I am joining a video call using WebRTC (via Chrome) and I would like to record every incoming video stream from each participant in the browser.
The solutions I am researching are:
Intercepting network packets coming to the browser, e.g. using Wireshark, and then decoding them, following this article: https://webrtchacks.com/video_replay/
Modifying a browser to store the recording as a file, e.g. by modifying Chromium itself
Screen recorders or solutions like xvfb & ffmpeg are not options due to resource constraints. Is there any other way to capture packets or encoded video as a file? The solution must work on Linux.
If the media stream is what you want, one method is to override the browser's PeerConnection. Here is an example:
In an extension manifest, add the following content script:
"content_scripts": [
    {
        "matches": ["http://*/*", "https://*/*"],
        "js": ["payload/inject.js"],
        "all_frames": true,
        "match_about_blank": true,
        "run_at": "document_start"
    }
]
inject.js
var inject = '('+function() {
    //override the browser's default RTCPeerConnection.
    var origPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
    //make sure it is supported
    if (origPeerConnection) {
        //our own RTCPeerConnection
        var newPeerConnection = function(config, constraints) {
            console.log('PeerConnection created with config', config);
            //proxy the original peer connection
            var pc = new origPeerConnection(config, constraints);
            //store the old addStream
            var oldAddStream = pc.addStream;
            //addStream is called when a local stream is added.
            //arguments[0] is a local media stream
            pc.addStream = function() {
                console.log("our add stream called!")
                //our mediaStream object
                console.dir(arguments[0])
                return oldAddStream.apply(this, arguments);
            }
            //ontrack is called when a remote track is added.
            //the media stream(s) are located in event.streams
            pc.ontrack = function(event) {
                console.log("ontrack got a track")
                console.dir(event);
            }
            window.ourPC = pc;
            return pc;
        };
        ['RTCPeerConnection', 'webkitRTCPeerConnection', 'mozRTCPeerConnection'].forEach(function(obj) {
            // Override objects if they exist in the window object
            if (window.hasOwnProperty(obj)) {
                window[obj] = newPeerConnection;
                // Copy the static methods
                Object.keys(origPeerConnection).forEach(function(x){
                    window[obj][x] = origPeerConnection[x];
                })
                window[obj].prototype = origPeerConnection.prototype;
            }
        });
    }
}+')();';

var script = document.createElement('script');
script.textContent = inject;
(document.head||document.documentElement).appendChild(script);
script.parentNode.removeChild(script);
I tested this with a voice call in Google Hangouts and saw that two mediaStreams were added via pc.addStream and one track was added via pc.ontrack. addStream would seem to receive the local media streams, and the event object in ontrack is an RTCTrackEvent, which has a streams property; those, I assume, are what you are looking for.
To access these streams from your extension's content script, you will need to create audio elements and set their srcObject property to the media stream, e.g.:
pc.ontrack = function(event) {
    //check if our element exists
    var elm = document.getElementById("remoteStream");
    if(elm == null) {
        //create an audio element
        elm = document.createElement("audio");
        elm.id = "remoteStream";
    }
    //set the srcObject to our stream. not sure if you need to clone it
    elm.srcObject = event.streams[0].clone();
    //write the element to the body
    document.body.appendChild(elm);
    //fire a custom event so our content script knows the stream is available.
    // you could pass the id in the "detail" object. for example:
    //new CustomEvent("remoteStreamAdded", {"detail":{"id":"audio_element_id"}})
    //then access it via e.detail.id in your event listener.
    var e = new CustomEvent("remoteStreamAdded");
    window.dispatchEvent(e);
}
Then in your content script you can listen for that event and access the mediastream like so:
window.addEventListener("remoteStreamAdded", function(e) {
    var elm = document.getElementById("remoteStream");
    var stream = elm.captureStream();
});
With the captured stream available to your content script you can do pretty much anything you want with it. For example, MediaRecorder works really well for recording the stream(s), or you could use something like peer.js or maybe binary.js to stream it to another source.
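For example, a minimal sketch of recording that stream with MediaRecorder, continuing from the stream variable in the snippet above (the one-second timeslice and Blob handling are illustrative choices):

var chunks = [];
var recorder = new MediaRecorder(stream); // container/codec defaults depend on the browser

recorder.ondataavailable = function(evt) {
    // collect encoded chunks as they arrive
    if (evt.data.size > 0) chunks.push(evt.data);
};
recorder.onstop = function() {
    // assemble the chunks into a single playable Blob
    var blob = new Blob(chunks, { type: recorder.mimeType });
    console.log("recorded", blob.size, "bytes");
};

recorder.start(1000); // emit a chunk roughly every second
// call recorder.stop() when you are done capturing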
I haven't tested this, but it should also be possible to override the local streams. For example, in inject.js you could establish a blank mediastream, override navigator.mediaDevices.getUserMedia, and instead of returning the local mediastream return your own.
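A rough sketch of that idea, untested as the answer notes, where the canvas-backed placeholder stream is purely an assumption:

// inject.js addition: hand back a canvas-backed stream instead of the camera.
var origGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);
navigator.mediaDevices.getUserMedia = function(constraints) {
    var canvas = document.createElement("canvas");
    canvas.width = 640;
    canvas.height = 480;
    // captureStream() gives a MediaStream whose frames we control;
    // draw to the canvas periodically if you need live frames.
    var fakeStream = canvas.captureStream(30);
    // call origGetUserMedia(constraints) here instead to fall back to the real devices
    return Promise.resolve(fakeStream);
};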
This method should work in Firefox, and maybe others as well, assuming you use an extension/app to load the inject.js script at the start of the document. It being loaded before any of the target's libs is key to making this work.
Capturing packets will only give you the network packets which you would then need to turn into frames and put into a container. A server such as Janus can record videos.
Running headless Chrome and using the JavaScript MediaRecorder API is another option, but much heavier on resources.

webrtc: failed to send arraybuffer over data channel in chrome

I want to send streaming data (as sequences of ArrayBuffer) from a Chrome extension to a Chrome App. Since the Chrome message API (including chrome.runtime.sendMessage, postMessage, ...) does not support ArrayBuffer, and JS arrays have poor performance, I have to try other methods. Eventually, I found that WebRTC over RTCDataChannel might be a good solution in my case.
I have succeeded in sending a string over an RTCDataChannel, but when I tried to send an ArrayBuffer I got:
code: 19
message: "Failed to execute 'send' on 'RTCDataChannel': Could not send data"
name: "NetworkError"
It seems it is not a bandwidth limit problem, since it failed even when I sent a single byte of data. Here is my code:
pc = new RTCPeerConnection(configuration, { optional: [ { RtpDataChannels: true } ]});
//...
var dataChannel = m.pc.createDataChannel("mydata", {reliable: true});
//...
var ab = new ArrayBuffer(8);
dataChannel.send(ab);
Tested on OS X 10.10.1 with Chrome M40 (Stable) and M42 (Canary), and on a Chromebook with M40.
I have filed a bug for WebRTC here.
I modified my code, and now everything works amazingly:
Removed the RtpDataChannels option when creating the RTCPeerConnection. (Yes, remove the RtpDataChannels option if you want a data channel; what a magical world!)
On the receiver side there is no need for createDataChannel; instead, handle onmessage and the other callbacks using event.channel from the pc.ondatachannel callback:
pc.ondatachannel = function(event) {
    var receiveChannel = event.channel;
    receiveChannel.onmessage = function(event){
        console.log("Got Data Channel Message:", event.data);
    };
};
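For completeness, a minimal sketch of the matching sender side after those changes (the onopen gating uses the standard SCTP data channel API; variable names are illustrative):

// Without the RtpDataChannels option you get an SCTP data channel,
// which supports binary payloads such as ArrayBuffer.
var pc = new RTCPeerConnection(configuration);
var dataChannel = pc.createDataChannel("mydata", { ordered: true });

dataChannel.onopen = function() {
    var ab = new ArrayBuffer(8);
    dataChannel.send(ab); // now succeeds on an SCTP channel
};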

How can I pass data between two Chrome apps?

I have created two Chrome apps and I want to pass some data (in string format) from one Chrome app to the other. I would appreciate it if someone could show me the correct way of doing this.
It's an RTFM question.
From the Messaging documentation (note that it mentions extensions, but it works for apps):
In addition to sending messages between different components in your extension, you can use the messaging API to communicate with other extensions. This lets you expose a public API that other extensions can take advantage of.
You need to send messages using chrome.runtime.sendMessage (using the app ID) and receive them using the chrome.runtime.onMessageExternal event. If required, long-lived connections can also be established.
// App 1
var app2id = "abcdefghijklmnoabcdefhijklmnoab2";
chrome.runtime.onMessageExternal.addListener(
    // This should fire even if the app is not running, as long as it is
    // included in the event page (background script)
    function(request, sender, sendResponse) {
        if(sender.id == app2id && request.data) {
            // Use data passed
            // Pass an answer with sendResponse() if needed
        }
    }
);
// App 2
var app1id = "abcdefghijklmnoabcdefhijklmnoab1";
chrome.runtime.sendMessage(app1id, {data: /* some data */},
    function(response) {
        if(response) {
            // Installed and responded
        } else {
            // Could not connect; not installed
            // Maybe inspect chrome.runtime.lastError
        }
    }
);
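And since the documentation also mentions long-lived connections, here is a hedged sketch of the same exchange over a port, using chrome.runtime.connect and chrome.runtime.onConnectExternal (the message shapes are illustrative):

// App 2: open a long-lived port to App 1
var port = chrome.runtime.connect(app1id);
port.onMessage.addListener(function(msg) {
    // Handle replies from App 1
});
port.postMessage({data: "hello"});

// App 1: accept connections from other apps/extensions
chrome.runtime.onConnectExternal.addListener(function(port) {
    port.onMessage.addListener(function(msg) {
        // Use msg.data, and reply over the same port
        port.postMessage({ack: true});
    });
});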