I'm trying to perform a live A/B comparison of a microphone track with and without MediaStreamTrack noiseSuppression. On Windows 10 this works as expected in Firefox, but in Chrome the noiseSuppression setting does not change when track.applyConstraints is invoked. In the following snippet, the last two logs are both expected to be false, but the final one is true in Chrome. Is this a Chrome bug?
// Changing an audio track constraint does not change the track setting in Chrome.
console.log(navigator.mediaDevices.getSupportedConstraints().noiseSuppression); // true
navigator.mediaDevices.getUserMedia({ video: false, audio: { noiseSuppression: true } })
  .then(stream => {
    const track = stream.getTracks()[0];
    console.log(track.getSettings().noiseSuppression); // true
    track.applyConstraints({ noiseSuppression: false })
      .then(() => {
        console.log(track.getConstraints().noiseSuppression); // false
        console.log(track.getSettings().noiseSuppression); // true in Chrome (expected false)
      });
  });
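For what it's worth, one workaround that sidesteps applyConstraints entirely is to request a separate track per configuration and A/B between them. This is only a sketch (the helper name is mine); it relies on getUserMedia honoring the constraint at acquisition time, which the second log above suggests Chrome does:
// Sketch: acquire one track per noiseSuppression value instead of toggling
// an existing track with applyConstraints.
async function getMicTrack(noiseSuppression) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: false,
    audio: { noiseSuppression },
  });
  return stream.getAudioTracks()[0];
}

(async () => {
  const suppressed = await getMicTrack(true);  // processed signal
  const raw = await getMicTrack(false);        // unprocessed signal
  console.log(suppressed.getSettings().noiseSuppression); // true
  console.log(raw.getSettings().noiseSuppression);        // expected: false
})();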
I'm building an extension where, when the extension first starts (the browser is started or the extension is updated), a window is opened with an HTML file containing a form that asks for a master password. When the master password does not match a certain string, a message is sent through chrome.runtime.sendMessage. A message is also sent the same way when the modal window is closed, via the chrome.windows.onRemoved listener.
Here is my service worker:
/// <reference types="chrome-types"/>
(async () => {
  console.log('Extension started. Modal opened...');
  const window = await chrome.windows.create({
    url: chrome.runtime.getURL("html/index.html"),
    type: "popup",
    width: 400,
    height: 600,
  });
  chrome.windows.onRemoved.addListener((windowId) => {
    if (windowId === window?.id) {
      chrome.runtime.sendMessage({ monitoringEnabled: true, reason: 'tab closed' }).catch(console.log);
    }
  });
  chrome.runtime.onMessage.addListener((message) => {
    if (Object.hasOwn(message, 'monitoringEnabled')) {
      console.log(`Monitoring ${message.monitoringEnabled ? 'enabled' : 'disabled'}. ${message.reason ? `Reason: ${message.reason}` : ''}`);
      chrome.storage.local.set({ monitoringEnabled: message.monitoringEnabled });
      if (window?.id) chrome.windows.remove(window.id);
    }
    return true;
  });
})();
The HTML file just has a form with a button which, when clicked, triggers a script:
const MASTER_PASSWORD = 'some_thing_here';
document.getElementById('submit-button').addEventListener("click", (e) => {
  const password = document.getElementById('master-password-text-field').value;
  if (password !== MASTER_PASSWORD) {
    return chrome.runtime.sendMessage({ monitoringEnabled: true, reason: 'invalid password' });
  }
  return chrome.runtime.sendMessage({ monitoringEnabled: false });
});
These are some logs:
The first error appears when the modal tab is closed; notice that nothing happens after it (i.e. the onMessage listener is not triggered). In the second case, when a message is sent from the modal script, the onMessage listener is triggered, but the connection error still appears after the code in the listener has run.
I'm not sure why this happens. I've checked multiple other threads on the same topic, but none of them helped. If you have a better idea of how I can achieve what I want, please suggest it.
It turns out I was sending a message to the service worker from the service worker itself, and chrome.runtime.sendMessage does not deliver a message to listeners registered in the same context that sent it, which is why the call rejects with a connection error. I've rewritten my code so that a single function is called both when the windows.onRemoved event is triggered and when a message is sent from the modal tab. That seems to have fixed my issue. This is my service worker code for reference:
/// <reference types="chrome-types"/>
console.log('Extension started. Modal opened...');

let windowId: number | null = null;
chrome.windows
  .create({
    url: chrome.runtime.getURL('html/index.html'),
    type: 'popup',
    width: 400,
    height: 600
  })
  .then((created) => (windowId = created?.id ?? null));

chrome.windows.onRemoved.addListener((id) => {
  if (id === windowId) enableMonitoring('window closed');
});

chrome.runtime.onMessage.addListener((message) => {
  if (message.monitoringEnabled) {
    enableMonitoring(message.reason);
  }
  return undefined;
});

function enableMonitoring(reason: any) {
  console.log('monitoring enabled', reason);
}
I am trying to capture network calls using the chrome.debugger API. Everything works, but once the page loads or I click a link, the debugger detaches automatically after that first action. Here is the relevant part of the code:
let tabId; // id of the tab being debugged (was an implicit global in my code)
const version = '1.3'; // debugger protocol version (assumed; defined elsewhere in my code)

chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (message.play) {
    if (message.captureNtwrk) {
      chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
        const tab = tabs[0];
        tabId = tab.id;
        chrome.debugger.attach({ tabId: tabId }, version, onAttach.bind(null, tabId));
      });
    }
    if (message.captureScreen) {
    }
    if (message.captureEvent) {
    }
  }
  if (message.stop) {
    chrome.debugger.detach({ tabId: tabId });
  }
});

// Function to run while debugging
const onAttach = (tabId) => {
  if (chrome.runtime.lastError) console.log(chrome.runtime.lastError.message);
  chrome.debugger.sendCommand({ tabId: tabId }, 'Network.enable');
  chrome.debugger.onEvent.addListener(onEvent); // onEvent is defined elsewhere (not shown)
  chrome.debugger.onDetach.addListener((source, reason) => {
    console.log('Detached: ' + reason);
  });
};
You can try disabling other extensions, especially third-party extensions added by desktop applications. One extension observed to cause the debugger to detach after page navigation is Adobe Acrobat.
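As a quick diagnostic, logging the detach reason can confirm whether another client is taking over the target. A minimal sketch using the documented onDetach event:
// Diagnostic sketch: a reason of "canceled_by_user" typically means another
// client (DevTools, or another extension such as Adobe Acrobat) attached to
// the same target; "target_closed" means the tab itself went away.
chrome.debugger.onDetach.addListener((source, reason) => {
  console.log('Debugger detached from tab', source.tabId, 'reason:', reason);
});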
Trying to solve the problem referenced in this article: https://philna.sh/blog/2018/10/23/service-workers-beware-safaris-range-request/
and here:
PWA - cached video will not play in Mobile Safari (11.4)
The root problem is that we aren't able to show videos in Safari. The article says it has the fix for the issue, but the fix seems to cause another problem in Chrome. One difference in our solution is that we aren't using caching; currently we just want to pass the request through in our service worker. The implementation looks like this:
self.addEventListener('fetch', function (event) {
  if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') {
    return;
  }
  if (event.request.headers.get('range')) {
    event.respondWith(returnRangeRequest(event.request));
  } else {
    event.respondWith(fetch(event.request));
  }
});
function returnRangeRequest(request) {
  return fetch(request)
    .then(res => {
      return res.arrayBuffer();
    })
    .then(function (arrayBuffer) {
      var bytes = /^bytes\=(\d+)\-(\d+)?$/g.exec(
        request.headers.get('range')
      );
      if (bytes) {
        var start = Number(bytes[1]);
        var end = Number(bytes[2]) || arrayBuffer.byteLength - 1;
        return new Response(arrayBuffer.slice(start, end + 1), {
          status: 206,
          statusText: 'Partial Content',
          headers: [
            ['Content-Range', `bytes ${start}-${end}/${arrayBuffer.byteLength}`]
          ]
        });
      } else {
        return new Response(null, {
          status: 416,
          statusText: 'Range Not Satisfiable',
          headers: [['Content-Range', `*/${arrayBuffer.byteLength}`]]
        });
      }
    });
}
We do get an array buffer back from the range-request fetch, but it has a byteLength of zero and appears to be empty. The range header actually contains "bytes=0-", and subsequent requests have a start value but no end value.
Maybe there is some feature detection we can do to determine that it's Chrome, so we can just call fetch normally? I'd rather have a solution that works everywhere, though. Also, res shows type: "opaque", so maybe that has something to do with it? I'm not quite sure what to look at next. If we can't solve the problem for Chrome, I might need a different solution for Safari.
It seems that it was the opaque response. I didn't realize that the fetch was 'no-cors' by default. Adding 'cors' mode and overwriting the range header seems to have allowed the rewrite to work in Chrome. Sadly, it still doesn't work in Safari, but I was able to access the arrayBuffer after setting the CORS values properly.
Here is the change I had to make:
// Passing an empty headers object drops the original Range header, and
// forcing CORS mode keeps the response from being opaque.
var myHeaders = {};
return fetch(request, { headers: myHeaders, mode: 'cors', credentials: 'omit' })
  .then(res => {
    return res.arrayBuffer();
  })
It's important that the server responds with the appropriate CORS headers, e.g.:
access-control-allow-methods: GET
access-control-allow-origin: *
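For reference, here is a minimal sketch of sending those headers from a Node/Express server (assuming Express is what serves the video; the paths are illustrative):
const express = require('express');
const app = express();

// Attach the CORS headers the service worker needs before serving video files.
app.use('/videos', (req, res, next) => {
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Access-Control-Allow-Methods', 'GET');
  next();
});
app.use('/videos', express.static('videos')); // hypothetical media directory

app.listen(3000);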
I am trying to save video recorded through Video.js to the server. Below is my code:
<script>
  var player = videojs("myVideo", {
    controls: true,
    width: 320,
    height: 240,
    plugins: {
      record: {
        audio: true,
        video: true,
        maxLength: 41,
        debug: true
      }
    }
  });

  player.on('startRecord', function () {
    console.log('started recording!');
  });

  player.on('finishRecord', function () {
    console.log('finished recording: ', player.recordedData);
  });

  function uploadFunction() {
    // WRITE CODE TO SAVE player.recordedData.video in specified folder
  }
</script>
Live implementation: https://www.propertybihar.com/neo/videxp1/index.html
I went through a previously asked question, but it didn't work for me:
How can javascript upload a blob?
If you scroll down to the "Upload" section of the README, you'll see this code snippet that does what you want, except it's written for a streaming application:
var segmentNumber = 0;
player.on('timestamp', function () {
  if (player.recordedData && player.recordedData.length > 0) {
    var binaryData = player.recordedData[player.recordedData.length - 1];
    segmentNumber++;
    var formData = new FormData();
    formData.append('SegmentNumber', segmentNumber);
    formData.append('Data', binaryData);
    $.ajax({
      url: '/api/Test',
      method: 'POST',
      data: formData,
      cache: false,
      processData: false,
      contentType: false,
      success: function (res) {
        console.log("segment: " + segmentNumber);
      }
    });
  }
});
That is configured for continuously uploading the data, but I've found I had to make a few changes to it for my own setup:
On Chrome 64 with Video.js 6.7.3 and videojs-record 2.1.2, it seems that player.recordedData is not an array but just a blob.
I wanted to upload the video at a particular time, not stream it, so I trigger the upload myself.
As a result, my upload code looks something like this:
if (player.recordedData) {
  var binaryData = player.recordedData.video;
  // ... rest of the FormData and $.ajax code from the previous snippet
}
If I don't do it this way, the check for existing data to upload always fails. I also trigger this code manually rather than attaching it to the player's "timestamp" event. Of course, you'll need server-side code that accepts this upload.
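The server side isn't shown here, but as a rough sketch (assuming Node with Express and multer; any stack that accepts multipart/form-data will do), it could look like this:
const express = require('express');
const multer = require('multer');

const app = express();
const upload = multer({ dest: 'uploads/' }); // store uploads on disk

// 'Data' matches the FormData field name used in the snippets above.
app.post('/api/Test', upload.single('Data'), (req, res) => {
  console.log('Received segment', req.body.SegmentNumber, 'saved at', req.file.path);
  res.sendStatus(200);
});

app.listen(3000);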
I've got the jwplayer source code from GitHub. I want to change some scripts and build the player myself, so I need to change the file source from JavaScript in the Flash code. In JavaScript I'm setting the params "host" and "flv_id":
jwplayer("mediaplayer").setup({
autostart: false,
controlbar: "none",
displayclick: "none",
smoothing: true,
stretching: "exactfit",
icons: false,
flashplayer: "/jwplayer.swf",
file: "/videos/3aae1ef41d.flv",
flv_id: "115554",
host: "<?php echo $host; ?>",
provider: "http",
startparam: "start",
height: 400,
width: 650,
events: {
onComplete: function() {
},
onPause: function(event) {
},
onError: function() {
}
}
});
In Flash I have a class which can make a POST request:
var post:Post = new Post("http://" + someparameters["host"] + "/video/flv");
post.variables.id = someparameters["flv_id"];
post.Send(Go);
Go is the success callback function that receives the flv link:
Go(link:String):void {
  // link is the source that I need to play
}
The player is playing "/videos/3aae1ef41d.flv", but I want it to play the source returned by Go(). I have the Post class, but I don't know where to paste my code; so far I haven't changed anything in the default source code, and I don't know which file of the player source to edit. So I need to know how I can use my Post class to play the video from the Go function.
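If patching the Flash sources proves hard to pin down, one alternative worth considering (a sketch only; it assumes the /video/flv endpoint is reachable from JavaScript and returns the flv link as plain text, mirroring what the Post class does) is to resolve the real URL in JavaScript first and only then call setup:
// Sketch: POST the id to the resolver endpoint, then set up the player with
// the returned link instead of the placeholder .flv path. The endpoint and
// response format are assumptions based on the Post class above.
var xhr = new XMLHttpRequest();
xhr.open("POST", "http://" + host + "/video/flv"); // host as injected by PHP
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.onload = function () {
  var link = xhr.responseText; // assumed: the flv link as plain text
  jwplayer("mediaplayer").setup({
    flashplayer: "/jwplayer.swf",
    file: link,
    provider: "http",
    startparam: "start",
    height: 400,
    width: 650
  });
};
xhr.send("id=115554");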