How to remove the CC button by default from an embedded Vimeo player (disabling the texttracks option via the Vimeo API) - vimeo

I've created an embed preset and set it as the default preset for all videos I upload on Vimeo. This preset removes everything but the play button. I have embedded the video that is the basis for the preset and it works correctly.
However, when I embed new videos uploaded with this preset as default, the CC button appears as well. This means that for these videos, the embedded player has the play button and CC button and nothing else.
Interestingly, I can't even manually remove the CC button for these videos under "https://vimeo.com/manage/videos/{video_id}/customize". All options are already turned off for that area, as I would expect because of the default preset, but the CC button is still there.
I am programmatically uploading these new videos using the Python client:
client.upload(file_name, data={
    'name': title
})
I have a Pro account. What am I missing?

FIRST - correcting already-added captions (in the API they are called texttracks)
Documentation: https://developer.vimeo.com/api/upload/texttracks#uploading-a-text-track-step-1
Python example
You need two things:
a video JSON object (to get the uri of the video via video['uri']) - or directly change video['uri'] in this example to the uri you have
an auth token (as auth_token)
Here is how the final call looks:
video = {"uri": "/videos/0000000"}
set_video_texttracks_inactive(video)
Get active texttracks for video:
import json
import requests

def get_texttracks_uris(video):
    texttracks_uris = []
    # video['uri'] already starts with a slash, e.g. "/videos/0000000"
    video_url_tt = f"https://api.vimeo.com{video['uri']}/texttracks"
    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {auth_token}'
    }
    response = requests.request("GET", video_url_tt, headers=headers)
    data = json.loads(response.text)
    for texttrack in data['data']:
        if texttrack['active']:
            texttracks_uris.append(texttrack['uri'])
    print(f"Found: {len(texttracks_uris)} texttracks for video {video['uri']}")
    return texttracks_uris
Disable all texttracks example function:
def set_video_texttracks_inactive(video):
    texttrack_uris = get_texttracks_uris(video)
    if not texttrack_uris:
        print(f"No TEXTTRACKS uris found for video {video['uri']}")
    else:
        for texttrack_uri in texttrack_uris:
            # texttrack_uri already starts with a slash
            url = f"https://api.vimeo.com{texttrack_uri}"
            payload = "{ \r\n \"active\": false \r\n}"
            headers = {
                'Content-Type': 'application/json',
                'Authorization': f'Bearer {auth_token}'
            }
            response = requests.request("PATCH", url, headers=headers, data=payload)
            message = 'SETTING VIDEO TEXTTRACKS TO FALSE \n'
            message += f'resp: {response.status_code} \n'
            message += f'url: {url} \n'
            print(message)
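A small aside on the payload: rather than hand-escaping the JSON string as above, json.dumps builds the same body safely. A minimal sketch (the helper name is mine):

```python
import json

def inactive_payload():
    # Builds the PATCH body {"active": false} without manual escaping.
    return json.dumps({"active": False})
```

You can then pass data=inactive_payload() to the PATCH request above.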
SECOND - it is also good to disable the feature in your account:
https://vimeo.zendesk.com/hc/en-us/articles/224968828-Captions-and-subtitles#h_01FTGQWR58905Z6HGPS6F6KYSS
Navigate to the Upload defaults section of your account settings.
By default, “Allow viewers to enable automatically generated captions” will be checked. Uncheck the box to turn this off.
Click Save.
Your videos will still be transcribed on the backend, but the captions will not be available to your viewers on future uploads unless you enable them on a specific video (read below).

Related

Properly using chrome.tabCapture in a manifest v3 extension

Edit:
As the end of the year and the end of Manifest V2 approach, I did a bit more research on this and found the following workarounds:
The example here that uses the desktopCapture API:
https://github.com/GoogleChrome/chrome-extensions-samples/issues/627
The problem with this approach is that it requires the user to select a capture source via some UI, which can be disruptive. The --auto-select-desktop-capture-source command-line switch can apparently be used to bypass this, but I haven't been able to use it with success.
The example extension here works around tabCapture not working in service workers by creating its own inactive tab from which to access the tabCapture API and record the currently active tab:
https://github.com/zhw2590582/chrome-audio-capture
This seems to be the best solution I've found so far in terms of UX. The background page provided in Manifest V2 is essentially replaced with a phantom tab.
The roundabout nature of the second solution also suggests that the tabCapture API is essentially not intended for use in Manifest V3, or else there would be a more straightforward way to use it. I am disappointed that Manifest V3 is being enforced while essentially leaving behind Manifest V2 features such as this one.
Original Post:
I'm trying to write a manifest v3 Chrome extension that captures tab audio. However as far as I can tell, with manifest v3 there are some changes that make this a bit difficult:
Background scripts are replaced by service workers.
Service workers do not have access to the chrome.tabCapture API.
Despite this, I managed to get something that nearly works, as popup scripts still have access to chrome.tabCapture. However, there is a drawback: the audio of the tab is muted, and there doesn't seem to be a way to unmute it. This is what I have so far:
Query the service worker for the current tab from the popup script.
let tabId;

// Fetch tab immediately
chrome.runtime.sendMessage({command: 'query-active-tab'}, (response) => {
  tabId = response.id;
});
This is the service worker, which responds with the current tab ID.
chrome.runtime.onMessage.addListener(
  (request, sender, sendResponse) => {
    // Popup asks for current tab
    if (request.command === 'query-active-tab') {
      chrome.tabs.query({active: true}, (tabs) => {
        if (tabs.length > 0) {
          sendResponse({id: tabs[0].id});
        }
      });
      return true;
    }
    ...
Again in the popup script, from a keyboard shortcut command, use chrome.tabCapture.getMediaStreamId to get a media stream ID to be consumed by the current tab, and send that stream ID back to the service worker.
// On command, get the stream ID and forward it back to the service worker
chrome.commands.onCommand.addListener((command) => {
  chrome.tabCapture.getMediaStreamId({consumerTabId: tabId}, (streamId) => {
    chrome.runtime.sendMessage({
      command: 'tab-media-stream',
      tabId: tabId,
      streamId: streamId
    });
  });
});
The service worker forwards that stream ID to the content script.
chrome.runtime.onMessage.addListener(
  (request, sender, sendResponse) => {
    ...
    // Popup sent back media stream ID, forward it to the content script
    if (request.command === 'tab-media-stream') {
      chrome.tabs.sendMessage(request.tabId, {
        command: 'tab-media-stream',
        streamId: request.streamId
      });
    }
  }
);
The content script uses navigator.mediaDevices.getUserMedia to get the stream.
// Service worker sent us the stream ID, use it to get the stream
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      mandatory: {
        chromeMediaSource: 'tab',
        chromeMediaSourceId: request.streamId
      }
    }
  })
  .then((stream) => {
    // Once we're here, the audio in the tab is muted
    // However, recording the audio works!
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => {
      chunks.push(e.data);
    };
    recorder.onstop = (e) => saveToFile(new Blob(chunks), "test.wav");
    recorder.start();
    setTimeout(() => recorder.stop(), 5000);
  });
});
Here is the code that implements the above: https://github.com/killergerbah/-test-tab-capture-extension
This actually does produce a MediaStream, but the drawback is that the sound of the tab is muted. I've tried playing the stream through an audio element, but that seems to do nothing.
Is there a way to obtain a stream of the tab audio in a manifest v3 extension without muting the audio in the tab?
I suspect that this approach might be completely wrong as it's so roundabout, but this is the best I could come up with after reading through the docs and various StackOverflow posts.
I've also read that the tabCapture API is going to be moved for manifest v3 at some point, so maybe the question doesn't even make sense to ask - however if there is a way to still properly use it I would like to know.
I found your post very useful in progressing my implementation of an audio tab recorder.
Regarding the specific muting issue you were running into, I resolved it by looking here: Original audio of tab gets muted while using chrome.tabCapture.capture() and MediaRecorder()
// Service worker sent us the stream ID, use it to get the stream
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      mandatory: {
        chromeMediaSource: 'tab',
        chromeMediaSourceId: request.streamId
      }
    }
  })
  .then((stream) => {
    // To resolve original audio muting
    const context = new AudioContext();
    const audio = context.createMediaStreamSource(stream);
    audio.connect(context.destination);

    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => {
      chunks.push(e.data);
    };
    recorder.onstop = (e) => saveToFile(new Blob(chunks), "test.wav");
    recorder.start();
    setTimeout(() => recorder.stop(), 5000);
  });
});
This may not be exactly what you are looking for, but perhaps it may provide some insight.
I've tried playing the stream through an audio element, but that seems to do nothing.
Ironically, this is how I managed to get around the issue: by creating an audio element in the popup itself. When using tabCapture in the popup script, it returns the stream, and I set the audio element's srcObject to that stream.
HTML:
<audio id="audioObject" autoplay> No source detected </audio>
JS:
chrome.tabCapture.capture({audio: true, video: false}, function(stream) {
  var audio = document.getElementById("audioObject");
  audio.srcObject = stream;
});
According to this post on Manifest V3, chrome.capture will be the new namespace for tabCapture and the like, but I haven't seen anything beyond that.
I had this problem too, and I resolved it by using the Web Audio API. Just create a new context and connect it to a media stream source using the captured MediaStream. Here is an example:
avoidSilenceInTab: (desktopStream: MediaStream) => {
  var contextTab = new AudioContext();
  contextTab
    .createMediaStreamSource(desktopStream)
    .connect(contextTab.destination);
}

Google slides returns 'The provided image is in an unsupported format' even with Authorization fields configured

The closest thread is
https://stackoverflow.com/questions/60160794/getting-the-provided-image-is-in-an-unsupported-format-error-when-trying-to-in
But I don't want to open up the image by sharing it with the rest of the world, so I configured the permission with a domain:
file_id = uploaded_file.get('id')
drive_service.permissions().create(fileId=file_id, body={
    'type': 'domain',
    'domain': 'mydomain.com',  # the domain of the G Suite user I log in with, e.g. richard@mydomain.com
    'role': 'reader'
}).execute()
And then the Slides resource is constructed by
def build_request(http, *args, **kwargs):
    import google_auth_httplib2
    new_http = google_auth_httplib2.AuthorizedHttp(credentials=drive_creds)
    auth_header = {
        'Authorization': 'Bearer ' + drive_creds.token
    }
    headers = kwargs.get('headers', {})
    if not headers:
        kwargs['headers'] = auth_header
    else:
        kwargs['headers'].update(auth_header)
    http_req = googleapiclient.http.HttpRequest(new_http, *args, **kwargs)
    return http_req
slides_service = build('slides', 'v1', credentials=slides_creds, requestBuilder=build_request)
When executing I can observe that kwargs are passed to HttpRequest with Authorization fields configured correctly like
{'headers': {'Authorization': 'Bearer <google drive token>', ... }
However, during execution the create-image call always returns status code 400 with the message 'The provided image is in an unsupported format'. (I am sure my create-image function works correctly, because when I use a publicly accessible image, e.g. the Google logo, there is no problem posting the image to the Google Slides page.) When I open a private window and paste the link, the request still appears to be redirected to the sign-in page.
Are there any additional steps I need to configure to get this working? Many thanks for the help.
Update 1:
The code below is used to create the corresponding Slides and Drive resources based on the Google doc.
slides_scopes = ['https://www.googleapis.com/auth/drive',
                 'https://www.googleapis.com/auth/drive.file',
                 'https://www.googleapis.com/auth/drive.readonly',
                 'https://www.googleapis.com/auth/presentations',
                 'https://www.googleapis.com/auth/spreadsheets',
                 'https://www.googleapis.com/auth/spreadsheets.readonly']
drive_scopes = ['https://www.googleapis.com/auth/drive',
                'https://www.googleapis.com/auth/drive.appdata',
                'https://www.googleapis.com/auth/drive.file',
                'https://www.googleapis.com/auth/drive.metadata',
                'https://www.googleapis.com/auth/drive.metadata.readonly',
                'https://www.googleapis.com/auth/drive.photos.readonly',
                'https://www.googleapis.com/auth/drive.readonly']
def auth(token_file='token.pickle', credentials_json='cred.json', scopes=[]):
    creds = None
    if os.path.exists(token_file):
        with open(token_file, 'rb') as token:
            creds = pickle.load(token)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                credentials_json, scopes)
            creds = flow.run_local_server(port=0)
        with open(token_file, 'wb') as token:
            pickle.dump(creds, token)
    return creds
slides_creds = auth(token_file='slides_t.pickle', credentials_json='slides_t.json', scopes=slides_scopes)
drive_creds = auth(token_file='drive_t.pickle', credentials_json='drive_t.json', scopes=drive_scopes)
def build_request(http, *args, **kwargs):
    import google_auth_httplib2
    new_http = google_auth_httplib2.AuthorizedHttp(credentials=drive_creds)
    auth_header = {
        'Authorization': 'Bearer ' + drive_creds.token
    }
    headers = kwargs.get('headers', {})
    if not headers:
        kwargs['headers'] = auth_header
    else:
        kwargs['headers'].update(auth_header)
    http_req = googleapiclient.http.HttpRequest(new_http, *args, **kwargs)
    return http_req
slides_service = build('slides', 'v1', credentials=slides_creds, requestBuilder=build_request)
drive_service = build('drive', 'v3', credentials=drive_creds)
Update 2:
When switching to serving the image from a local HTTP server (it is accessible at http://<intranet ip or localhost>:<port>/path/to/img.png), the Google Slides API returns the following error:
"Invalid requests[0].createImage: There was a problem retrieving the image. The provided image should be publicly accessible, within size limit, and in supported formats."
This makes me wonder whether the Google Slides API no longer allows accessing a webContentLink with special permissions (e.g. domain), and only a publicly accessible URL is allowed instead.
Update 3:
create_image function parameters:
slide_object_id: g6f1c6c22f2_1_69
web_content_link: https://drive.google.com/a/{{domain name}}/uc?id={{image id}}&export=download
size: {'height': {'magnitude': 10800, 'unit': 'EMU'}, 'width': {'magnitude': 19800, 'unit': 'EMU'}}
transform: {'scaleY': 171.9097, 'scaleX': 212.4558, 'translateY': 937125, 'translateX': 2347875, 'unit': 'EMU'}
The create_image function:
def create_image(slides_service=None, slide_object_id=None, web_content_link=None,
                 presentation_id=None,
                 size_height_magnitude=4000000, size_width_magnitude=4000000,
                 transform_scale_x=1, transform_scale_y=1,
                 transform_translate_x=100000, transform_translate_y=100000):
    requests = []
    requests.append({
        'createImage': {
            'url': web_content_link,
            'elementProperties': {
                'pageObjectId': slide_object_id,
                'size': {
                    'height': {
                        'magnitude': size_height_magnitude,
                        'unit': 'EMU'
                    },
                    'width': {
                        'magnitude': size_width_magnitude,
                        'unit': 'EMU'
                    }
                },
                'transform': {
                    'scaleX': transform_scale_x,
                    'scaleY': transform_scale_y,
                    'translateX': transform_translate_x,
                    'translateY': transform_translate_y,
                    'unit': 'EMU'
                }
            }
        }
    })
    body = {
        'requests': requests
    }
    response = slides_service.presentations() \
        .batchUpdate(presentationId=presentation_id, body=body).execute()
    return response.get('replies')[0].get('createImage')
Issue:
Only images that have a publicly accessible URL can be added to a slide. Sharing the image with the account from which the request is made is not enough. As you can see in the official documentation:
If you want to add private or local images to a slide, you'll first need to make them available on a publicly accessible URL.
Workarounds:
What you could do, as explained in Tanaike's answer, is to: (1) share the image publicly, (2) add the image to the slide using the publicly accessible URL, and (3) delete the Permission created in step 1. This way, the image is publicly accessible only during the short time the program takes to execute steps 2 and 3.
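A minimal sketch of that share -> insert -> unshare sequence, assuming the drive_service object from the question; insert_fn stands in for whatever createImage call you make, and the helper names here are illustrative:

```python
def public_reader_body():
    # Permission body that makes the file readable by anyone with the link.
    return {'type': 'anyone', 'role': 'reader'}

def insert_image_temporarily(drive_service, file_id, insert_fn):
    # 1) share publicly, 2) insert the image, 3) always delete the permission.
    perm = drive_service.permissions().create(
        fileId=file_id, body=public_reader_body()).execute()
    try:
        return insert_fn()
    finally:
        drive_service.permissions().delete(
            fileId=file_id, permissionId=perm['id']).execute()
```

The try/finally makes sure the file does not stay public if the insert fails.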
Another option, provided in the referenced documentation, is to:
upload your images to Google Cloud Storage and use signed URLs with a 15 minute TTL. Uploaded images are automatically deleted after 15 minutes.
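If you go the Cloud Storage route, the signed-URL step might look like this (a sketch assuming a google-cloud-storage Blob object; bucket setup and the automatic deletion are not shown):

```python
from datetime import timedelta

def fifteen_minute_url(blob):
    # V4 signed URL that expires after 15 minutes, as the Slides docs suggest.
    return blob.generate_signed_url(expiration=timedelta(minutes=15), version="v4")
```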
Reference:
Slides API: Adding Images to a Slide

Ways to capture incoming WebRTC video streams (client side)

I am currently looking for the best way to store incoming WebRTC video streams. I am joining the video call using WebRTC (via Chrome) and I would like to record every incoming video stream from each participant in the browser.
The solutions I am researching are:
Intercepting network packets coming to the browser, e.g. using Wireshark, and then decoding them, following this article: https://webrtchacks.com/video_replay/
Modifying a browser to store the recording as a file, e.g. by modifying Chromium itself
Any screen recorders or solutions like xvfb & ffmpeg are not options due to resource constraints. Is there any other way that could let me capture packets or encoded video as a file? The solution must work on Linux.
If the media stream is what you want, one method is to override the browser's PeerConnection. Here is an example:
In an extension manifest add the following content script:
"content_scripts": [
  {
    "matches": ["http://*/*", "https://*/*"],
    "js": ["payload/inject.js"],
    "all_frames": true,
    "match_about_blank": true,
    "run_at": "document_start"
  }
]
inject.js
var inject = '(' + function() {
  // override the browser's default RTCPeerConnection
  var origPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
  // make sure it is supported
  if (origPeerConnection) {
    // our own RTCPeerConnection
    var newPeerConnection = function(config, constraints) {
      console.log('PeerConnection created with config', config);
      // proxy the original peer connection
      var pc = new origPeerConnection(config, constraints);
      // store the old addStream
      var oldAddStream = pc.addStream;
      // addStream is called when a local stream is added.
      // arguments[0] is a local media stream
      pc.addStream = function() {
        console.log("our add stream called!");
        // our mediaStream object
        console.dir(arguments[0]);
        return oldAddStream.apply(this, arguments);
      };
      // ontrack is called when a remote track is added.
      // the media stream(s) are located in event.streams
      pc.ontrack = function(event) {
        console.log("ontrack got a track");
        console.dir(event);
      };
      window.ourPC = pc;
      return pc;
    };
    ['RTCPeerConnection', 'webkitRTCPeerConnection', 'mozRTCPeerConnection'].forEach(function(obj) {
      // Override objects if they exist in the window object
      if (window.hasOwnProperty(obj)) {
        window[obj] = newPeerConnection;
        // Copy the static methods
        Object.keys(origPeerConnection).forEach(function(x) {
          window[obj][x] = origPeerConnection[x];
        });
        window[obj].prototype = origPeerConnection.prototype;
      }
    });
  }
} + ')();';

var script = document.createElement('script');
script.textContent = inject;
(document.head || document.documentElement).appendChild(script);
script.parentNode.removeChild(script);
I tested this with a voice call in Google Hangouts and saw that two mediaStreams were added via pc.addStream and one track was added via pc.ontrack. addStream would seem to carry the local media streams, and the event object in ontrack is an RTCTrackEvent, which has a streams object - these, I assume, are what you are looking for.
To access these streams from your extension's content script, you will need to create audio elements and set their srcObject property to the media stream, e.g.:
pc.ontrack = function(event) {
  // check if our element exists
  var elm = document.getElementById("remoteStream");
  if (elm == null) {
    // create an audio element
    elm = document.createElement("audio");
    elm.id = "remoteStream";
  }
  // set the srcObject to our stream. not sure if you need to clone it
  elm.srcObject = event.streams[0].clone();
  // write the element to the body
  document.body.appendChild(elm);
  // fire a custom event so our content script knows the stream is available.
  // you could pass the id in the "detail" object, for example:
  // new CustomEvent("remoteStreamAdded", {"detail": {"id": "audio_element_id"}})
  // then access it via e.detail.id in your event listener.
  var e = new CustomEvent("remoteStreamAdded");
  window.dispatchEvent(e);
}
Then in your content script you can listen for that event/access the mediastream like so:
window.addEventListener("remoteStreamAdded", function(e) {
  elm = document.getElementById("remoteStream");
  var stream = elm.captureStream();
});
With the capture stream available to your content script you can do pretty much anything you want with it. For example, MediaRecorder works really well for recording the stream(s) or you could use something like peer.js or maybe binary.js to stream to another source.
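The MediaRecorder part can be sketched as below; the recorder constructor is passed in only so the flow can be exercised outside a browser, and in the page you would simply pass the global MediaRecorder:

```javascript
// Record a stream for `ms` milliseconds and resolve with the collected chunks.
function recordFor(stream, ms, RecorderCtor) {
  return new Promise((resolve) => {
    const recorder = new RecorderCtor(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = () => resolve(chunks);
    recorder.start();
    setTimeout(() => recorder.stop(), ms);
  });
}
```

For example, recordFor(stream, 5000, MediaRecorder).then(chunks => new Blob(chunks)) gives you a blob of the recording after five seconds.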
I haven't tested this, but it should also be possible to override the local streams. For example, in inject.js you could establish some blank mediastream, override navigator.mediaDevices.getUserMedia, and instead of returning the local mediastream return your own mediastream.
This method should work in Firefox and maybe other browsers as well, assuming you use an extension/app to load the inject.js script at the start of the document. Loading it before any of the target's libraries is key to making this work.
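That getUserMedia override idea could be sketched like this (untested, as noted above; fakeStreamFactory is a placeholder for however you construct the replacement MediaStream):

```javascript
// Wrap getUserMedia so callers receive our own stream instead of the mic/camera.
function overrideGetUserMedia(mediaDevices, fakeStreamFactory) {
  const original = mediaDevices.getUserMedia.bind(mediaDevices);
  mediaDevices.getUserMedia = (constraints) =>
    Promise.resolve(fakeStreamFactory(constraints, original));
  return original; // keep a handle in case you want to restore it later
}
```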
Capturing packets will only give you the network packets, which you would then need to turn into frames and put into a container. A server such as Janus can record videos.
Running headless Chrome and using the JavaScript MediaRecorder API is another option, but it is much heavier on resources.

Opening a PDF Blob in a new Chrome tab (Angular 2)

I am loading a PDF as follows (I am using Angular 2, but I am not sure that this matters..):
//Inside a service class
downloadPdf = (id): Observable<Blob> => {
  let headers = new Headers();
  headers.append("Accept", "application/pdf");
  return this.AuthHttp.get(this.pdfURL + id, {
    headers: headers,
    responseType: ResponseContentType.Blob
  }).map(res => new Blob([res.blob()], {type: "application/pdf"}));
}
//Inside a click handler
this.pdfService.downloadPdf(this.id).subscribe((data: Blob) => {
  let fileURL = window.URL.createObjectURL(data);
  window.open(fileURL);
});
This code runs nicely in Firefox. In Chrome, a new tab briefly flashes open and closes again. When I debug and manually browse to the object URL, Chrome can open it just fine.
What am I doing wrong here?
The opening of a new tab got blocked by an adblocker.
It cannot work: the new popup will be blocked by the browser because it was created from a callback, which is not a trusted event. To make it work, window.open must be called directly from the click handler, or you have to disable popup blocking in your browser.
Chrome will only allow this to work as wanted if the ajax call returns in less than a second. More here
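One common way around the trusted-event restriction is to open the tab synchronously inside the click handler and navigate it later, once the blob URL is ready. A sketch (window.open and the URL producer are passed as parameters here purely to keep the flow testable; in the component you would pass the real window.open and a promise for the object URL):

```javascript
// Open a blank tab during the trusted click event, then point it at the
// blob URL once the async download resolves.
function popupSafeOpen(open, getBlobUrl) {
  const tab = open('', '_blank'); // synchronous call: not popup-blocked
  return getBlobUrl().then((url) => {
    tab.location.href = url;      // navigate the already-open tab
    return tab;
  });
}
```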

webrtc: failed to send arraybuffer over data channel in chrome

I want to send streaming data (as sequences of ArrayBuffers) from a Chrome extension to a Chrome App. Since the Chrome message APIs (including chrome.runtime.sendMessage, postMessage, ...) do not support ArrayBuffer, and JS arrays have poor performance, I had to try other methods. Eventually, I found that WebRTC over RTCDataChannel might be a good solution in my case.
I have succeeded to send string over a RTCDataChannel, but when I tried to send ArrayBuffer I got:
code: 19
message: "Failed to execute 'send' on 'RTCDataChannel': Could not send data"
name: "NetworkError"
It seems that it's not a bandwidth limit problem, since it failed even when I sent a single byte of data. Here is my code:
pc = new RTCPeerConnection(configuration, { optional: [ { RtpDataChannels: true } ]});
//...
var dataChannel = pc.createDataChannel("mydata", {reliable: true});
//...
var ab = new ArrayBuffer(8);
dataChannel.send(ab);
Tested on OS X 10.10.1 with Chrome M40 (Stable) and M42 (Canary), and on a Chromebook with M40.
I have filed a bug for WebRTC here.
I modified my code, and now everything works:
I removed the RtpDataChannels option when creating the RTCPeerConnection. (Yes, remove the RtpDataChannels option if you want a data channel - what a magical world!)
On the receiver side there is no need to call createDataChannel; instead, handle onmessage and the other events using event.channel from the pc.ondatachannel callback:
pc.ondatachannel = function(event) {
  var receiveChannel = event.channel;
  receiveChannel.onmessage = function(event) {
    console.log("Got Data Channel Message:", event.data);
  };
};
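The working sender side can then be sketched like this (pc stands in for an RTCPeerConnection created without the RtpDataChannels option; the helper name is mine):

```javascript
// Create a data channel and send raw bytes over it as an ArrayBuffer.
function makeByteSender(pc) {
  const channel = pc.createDataChannel('mydata', { ordered: true });
  channel.binaryType = 'arraybuffer'; // receive binary as ArrayBuffer too
  return (bytes) => channel.send(bytes.buffer); // bytes: a Uint8Array
}
```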