Chromecast - How to reconnect to the session after page refresh?

After reloading the page, the CastContext returned by
cast.framework.CastContext.getInstance()
reports the cast state 'NOT_CONNECTED' and the session state 'NO_SESSION'.
My code example:
const castContext = window.cast.framework.CastContext.getInstance();
castContext.setOptions({
  receiverApplicationId:
    window.chrome.cast.media.DEFAULT_MEDIA_RECEIVER_APP_ID,
  autoJoinPolicy: window.chrome.cast.AutoJoinPolicy.ORIGIN_SCOPED,
  resumeSavedSession: true
});
await castContext.requestSession(); // wait for prompt
const castSession = castContext.getCurrentSession();
const mediaInfo = new window.chrome.cast.media.MediaInfo(mediaUrl);
const request = new window.chrome.cast.media.LoadRequest(mediaInfo);
await castSession.loadMedia(request);
window.player = new window.cast.framework.RemotePlayer();
window.playerController = new window.cast.framework.RemotePlayerController(
  window.player);
Can you please tell me how to connect to an existing session and receive information about playing media?

I've been looking around for answers for a while, and your mention of switching ports made me think.
Switching ports is not a valid solution!
Why does a solution suddenly stop working without any changes to my Chromecast code?
Answer: Chrome got the hiccups. I guess this is either because of repeated live-refreshes, or because something went wrong during development and got cached?!
Solution: I cleared the application storage and deleted my Chrome profile. After restarting Chrome, my solution reconnects to the Chromecast as it did before.
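For the reconnect question itself: with ORIGIN_SCOPED auto-join and resumeSavedSession enabled, the framework can rejoin the saved session on its own after a reload, so instead of calling requestSession() again you can listen for the session state. A minimal sketch against the standard Web Sender API (untested against the asker's setup):
const context = window.cast.framework.CastContext.getInstance();
context.addEventListener(
  window.cast.framework.CastContextEventType.SESSION_STATE_CHANGED,
  (event) => {
    // SESSION_RESUMED fires when the framework rejoins an existing session after a reload.
    if (event.sessionState === window.cast.framework.SessionState.SESSION_RESUMED) {
      const session = context.getCurrentSession();
      const mediaSession = session.getMediaSession(); // null if nothing is loaded
      console.log('Resumed session, current media:', mediaSession && mediaSession.media);
    }
  });

// The RemotePlayer mirrors the receiver state once a session exists.
const player = new window.cast.framework.RemotePlayer();
const controller = new window.cast.framework.RemotePlayerController(player);
controller.addEventListener(
  window.cast.framework.RemotePlayerEventType.IS_CONNECTED_CHANGED,
  () => {
    if (player.isConnected) {
      console.log('Playing:', player.mediaInfo, 'at', player.currentTime);
    }
  });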

Related

Google login fails with HTTP error 400 saying "Sorry, something went wrong there. Try again."

Description/background
I had set up a script which opened a Google site of our company in Google Chrome (not headless) and did some automated work on that page. The login information had to be refreshed occasionally, for which I logged in manually. That had been working perfectly for the last couple of months, until last week. Today I noticed that I get the above-mentioned error message, as a result of a server response with HTTP status 400, upon entering my Gmail address and clicking the Next button.
Steps to reproduce
Puppeteer version: 2.0.0
Platform / OS version: Windows 10
URLs (if applicable): https://sites.google.com/...
Node.js version: v12.13.0
What steps will reproduce the problem?
Run a Puppeteer script to open a Google Site which requires login.
const puppeteer = require('puppeteer');

(async () => {
  try {
    const browser = await puppeteer.launch({headless: false, userDataDir: "<ProfileDirectory>"});
    const pageLogin = await browser.newPage();
    await pageLogin.goto('https://sites.google.com/...', {waitUntil: 'networkidle2'});
    ...
    await browser.close();
  }
  catch (error) {
    console.log(error.stack);
  }
})();
Manually enter Gmail address and click Next.
Get error message "Sorry, something went wrong there. Try again." as a result of a server response with HTTP status code 400.
Update:
Manually opening Chrome (same userDataDir) and the respective Google site still works as usual.
I recommend using Playwright or Puppeteer with Firefox. It seems like Google adds something to Chrome so that they can detect whether the browser is automated.
One of the comments on this post mentions that Google tries to block logins with Puppeteer, Selenium, etc.; this might be why you are getting a 400 error.
One of the recent comments on the aforementioned post links a gist with some example code that might still work; I haven't tried it, though.
While I was doing research on Puppeteer for Firefox, I noticed that (1) Puppeteer downloads its own local browser binaries that it executes, and (2) my installed Puppeteer version 2.0.0 was outdated, meaning the browser actually used by Puppeteer was probably also outdated. The solution was as easy as updating Puppeteer to the latest version, 2.1.1.
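As a quick sanity check for the "outdated bundled browser" theory, you can ask Puppeteer which build it actually launches (a small sketch; the exact version string depends on the bundled Chromium):
const puppeteer = require('puppeteer');

(async () => {
  // Launches the browser binary that ships with the installed Puppeteer version.
  const browser = await puppeteer.launch();
  // Prints something like "HeadlessChrome/80.0.3987.0".
  console.log(await browser.version());
  await browser.close();
})();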

Programmatically start the performance profiling in Chrome

Is there a way to start the performance profiling programmatically in Chrome?
I want to run a performance test of my web app several times to get a better estimate of the FPS, but manually starting the performance profiling in Chrome is tricky because I'd have to manually align the frame models. (I am using this technique to extract the frames.)
Cmd+Shift+E reloads the page and immediately starts the profiling, which alleviates the alignment problem, but it only runs for 3 seconds, as explained here. So this doesn't work.
Ideally, I'd like to click a button that starts my test script and also starts the profiling. Is there a way to achieve that?
In case you're still interested, or in case someone else finds it helpful, there's an easy way to achieve this using Puppeteer's tracing class.
Puppeteer uses Chrome DevTools Protocol's Tracing Domain under the hood, and writes a JSON file to your system that can be loaded in the dev tools performance panel.
To get a profile trace of your page's loading time you can implement the following:
const puppeteer = require('puppeteer');

(async () => {
  // launch puppeteer browser in headful mode
  const browser = await puppeteer.launch({
    headless: false,
    devtools: true
  });
  // start a page instance in the browser
  const page = await browser.newPage();
  // start the profiling, with a path to the out file and screenshots collected
  await page.tracing.start({
    path: `tests/logs/trace-${new Date().getTime()}.json`,
    screenshots: true
  });
  // go to the page
  await page.goto('http://localhost:8080');
  // wait for as long as you want
  await page.waitFor(4000);
  // or you can wait for an element to appear with:
  // await page.waitForSelector('some-css-selector');
  // stop the tracing
  await page.tracing.stop();
  // close the browser
  await browser.close();
})();
Of course, you'll have to install Puppeteer first (npm i puppeteer). If you don't want to use Puppeteer, you can interact with the Chrome DevTools Protocol API directly (see the link above). I didn't investigate that option very much, since Puppeteer delivers a high-level and easy-to-use API on top of CDP. You can also interact directly with CDP via Puppeteer's CDPSession API.
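For completeness, a rough sketch of the CDPSession route mentioned above, using the Profiler domain to capture a V8 CPU profile (the URL is just a placeholder; note this produces a .cpuprofile, not the tracing JSON used by the Performance panel):
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Open a raw CDP channel to the page and drive the Profiler domain directly.
  const client = await page.target().createCDPSession();
  await client.send('Profiler.enable');
  await client.send('Profiler.start');
  await page.goto('http://localhost:8080');
  const { profile } = await client.send('Profiler.stop');
  // Load this file in the DevTools JavaScript Profiler panel.
  fs.writeFileSync('profile.cpuprofile', JSON.stringify(profile));
  await browser.close();
})();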
Hope this helps. Good luck!
You can use the Chrome DevTools Protocol directly and drive it with any of the driver libraries listed here: https://github.com/ChromeDevTools/awesome-chrome-devtools#protocol-driver-libraries to create a profile programmatically.
Use the Profiler.start method - https://chromedevtools.github.io/devtools-protocol/tot/Profiler#method-start - to start a profile.
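A rough sketch with one of those driver libraries, chrome-remote-interface, against a Chrome started with --remote-debugging-port=9222 (the URL is a placeholder; error handling omitted):
const CDP = require('chrome-remote-interface');
const fs = require('fs');

(async () => {
  // Connects to the DevTools endpoint of the already-running Chrome instance.
  const client = await CDP();
  const { Page, Profiler } = client;
  await Page.enable();
  await Profiler.enable();
  await Profiler.start();
  await Page.navigate({ url: 'http://localhost:8080' });
  await Page.loadEventFired(); // wait until the page has loaded
  const { profile } = await Profiler.stop();
  fs.writeFileSync('profile.cpuprofile', JSON.stringify(profile));
  await client.close();
})();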

Unable to update currentTime of HTML5 audio in Chrome, works in Firefox and Edge

Audio files are unseekable in Chrome, and they don't work at all in Opera. However, I am able to seek the audio file in Firefox and Microsoft Edge. I read on Stack Overflow that I have to enable byte-range support on the server. How can I do that, and why does it work in Firefox and Edge if byte ranges are disabled? I don't get any errors in the console; the file just starts from the beginning whenever I update audioElement.currentTime.
Here is the code that I am using to seek the audio file
$(".progress2").on("click", function (event) {
console.log("clicked")
var offset = $(this).offset();
var left = (event.pageX - offset.left);
var totalWidth = $("#custom-seekbar").width();
var percentage = ( left / totalWidth );
var vidTime = audioElement.duration * percentage;
audioElement.currentTime = vidTime;
})
<p id="custom-seekbar" class="progress2"><span></span></p>
I am on a Windows server, using XAMPP and Laravel. I've spent a lot of time on this; let me know how I can get this working or whether there are any alternatives.
After wasting some time, I found out that it is a server issue with byte ranges. I was running the Laravel development server, so I used curl to verify whether the server handles byte ranges:
curl -I http://i.imgur.com/z4d4kWk.jpg
HTTP/1.1 200 OK
...
Accept-Ranges: bytes
Content-Length: 146515
I checked with the Laravel development server and it was not working; then I checked with a Laravel project served directly from XAMPP, and it was working. I'm just leaving my answer here so that if anyone else stumbles upon this in the future, it might save you some time.
My live server also runs on XAMPP, so I had no reason to fix the byte-range issue within the Laravel development server.
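As an extra check, you can send an actual Range request and confirm the server answers with 206 Partial Content instead of 200 (the imgur URL is just the same example as above; the exact headers will vary):
curl -I -H "Range: bytes=0-1023" http://i.imgur.com/z4d4kWk.jpg
HTTP/1.1 206 Partial Content
...
Content-Range: bytes 0-1023/146515
Content-Length: 1024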

Interrupted downloads when downloading a file from Web Api (remote host closed error 0x800704CD)

I have read nearly 20 other posts about this particular error, but most seem to be issues with the code calling Response.Close or similar, which is not our case. I understand that this particular error typically means that a user browsed away from the web page or cancelled the request midway, but in our case we are getting this error without cancelling a request. I can observe the error after just a few seconds; the download simply fails in the browser (in both Chrome and IE, so it's not browser-specific).
We have a web api controller that serves a file download.
[HttpGet]
public HttpResponseMessage Download()
{
    // Enumerates a directory and returns a read-only FileStream of the download
    var stream = dataProvider.GetServerVersionAssemblyStream(configuration.DownloadDirectory, configuration.ServerVersion);
    if (stream == null)
    {
        return new HttpResponseMessage(HttpStatusCode.NotFound);
    }

    var response = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StreamContent(stream)
    };
    response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    response.Content.Headers.ContentDisposition.FileName = $"{configuration.ServerVersion}.exe";
    response.Content.Headers.ContentType = new MediaTypeHeaderValue(MediaTypeNames.Application.Octet);
    response.Content.Headers.ContentLength = stream.Length;
    return response;
}
Is there something incorrect we are doing in our Download method, or is there something we need to tweak in IIS?
This happens sporadically. I can't observe a pattern; it works sometimes, and other times it fails repeatedly.
The file download is about 150MB
The download is initiated from a hyperlink on our web page, there is no special calling code
The download is over HTTPS (HTTP is disabled)
The Web Api is hosted on Azure
It doesn't appear to be timing out; it can happen after just a second or two, so it's not hitting the default 30-second timeout values.
I also noticed I can't seem to initiate multiple file downloads from the server at once, which is concerning. This needs to be able to serve 150+ businesses and multiple simultaneous downloads, so I'm concerned there is something we need to tweak in IIS or the Web Api.
I was finally able to fix our problem. For us it turned out to be a combination of two things: 1) we had several memory leaks and CPU-intensive code in our Web API that were impacting concurrent downloads, and 2) we ultimately resolved the issue by changing MinBytesPerSecond (see https://blogs.msdn.microsoft.com/benjaminperkins/2013/02/01/its-not-iis/) to a lower value, or 0 to disable it. We have not had an issue since.
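For reference, minBytesPerSecond lives in the server-level webLimits section of the IIS configuration (a sketch; on Azure App Service you cannot edit applicationHost.config directly, so this typically has to go through an applicationHost.xdt transform):
<!-- applicationHost.config -->
<system.applicationHost>
  <webLimits minBytesPerSecond="0" /> <!-- 0 disables the minimum-throughput check; the default is 240 -->
</system.applicationHost>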

Actionscript services stop functioning

I have built a complex AIR application which has been running successfully for quite some time on many PCs. Unfortunately, I have a plaguing problem with internet connectivity, and I was wondering if anyone had encountered this issue before.
Every once in a while, the program will completely stop talking to the internet (all services start faulting). I wrote special code in my program to monitor the situation in which I use two different services to contact the same server.
The first service:
var req:URLRequest = new URLRequest("myURL.com");
this.urlMonitor = new URLMonitor(req, [200, 304]); // Acceptable status codes
this.urlMonitor.pollInterval = 60 * 1000; // Every minute
this.urlMonitor.addEventListener(StatusEvent.STATUS, onStatusChange);
this.urlMonitor.start();

private function onStatusChange(e:StatusEvent):void
{
    if (this.urlMonitor.available)
    {
        pollStatusOnline = true;
        Online = true;
    }
    else
    {
        pollStatusOnline = false;
        Online = false;
    }
}
The secondary method is a normal HTTP Service call:
checkInService = new HTTPService();
checkInService.method = "POST";
checkInService.addEventListener(ResultEvent.RESULT,sendResult);
checkInService.addEventListener(FaultEvent.FAULT, faultResult);
checkInService.addEventListener(InvokeEvent.INVOKE, invokeAttempt);
checkInService.url = "myURL.com";
checkInService.concurrency = Concurrency.LAST;
checkInService.send(params);
These two services point to the same location and work 98% of the time. Sometimes, after a few hours, I notice that both services can no longer connect to the website. The HTTPService returns a status code of 0. I am able to open a command prompt and ping the server directly with no problem from the PC that is failing. The services will not function again until the program is restarted.
I have been working on this issue for many months now without resolution. If anyone is able to point me in even a "this might possibly be the problem" direction, I would really appreciate it.
Thank you in advance.
Check the code value of the StatusEvent you receive from the URLMonitor - this might give more info than the HTTPService (you might also want to try passing a null value to the URLMonitor constructor, to widen the acceptable status codes).
If you have access to the server(s?) in question, check their logs. Could the server config have changed such that it might now consider such frequent requests as flooding?
You should also be able to use an HTTP debugger like Fiddler or Charles on the client machine to see more information about the requests going out of your application.
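A small sketch of the first suggestion, logging the StatusEvent code and passing null so that any status code counts as available (class and event names are from the AIR service-monitor framework; adjust to your setup):
import air.net.URLMonitor;
import flash.net.URLRequest;
import flash.events.StatusEvent;

var monitor:URLMonitor = new URLMonitor(new URLRequest("myURL.com"), null); // null = no status-code filter
monitor.pollInterval = 60 * 1000;
monitor.addEventListener(StatusEvent.STATUS, function(e:StatusEvent):void {
    // e.code is typically "Service.available" or "Service.unavailable";
    // log it alongside monitor.available to see what actually changes when the services stop responding.
    trace("status code: " + e.code + ", available: " + monitor.available);
});
monitor.start();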