I am trying to build an extension that would notify the user when a new version of Chrome is available.
I tried to inspect network traffic while Chrome checks for an update: it sends a request to http://74.125.95.113/service/update2?w=3:{long_encoded_string}, which returns XML with the information I need:
<?xml version="1.0" encoding="UTF-8"?>
<gupdate xmlns="http://www.google.com/update2/response" protocol="2.0" server="prod">
  <daystart elapsed_seconds="31272"/>
  <app appid="{8A69D345-D564-463C-AFF1-A69D9E530F96}" status="ok">
    <updatecheck status="noupdate"/>
    <ping status="ok"/>
  </app>
</gupdate>
Besides sending {long_encoded_string} as a URL parameter, it also sends some encoded cookie.
Maybe someone familiar with the Chrome build process can shed some light on those encoded strings and how to build them? Or maybe there is another, easier way (I have a feeling that string encoding is a dead end for me)?
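For reference, parsing the response is the easy part. Here is a minimal Python sketch that pulls the update status for the Chrome app id out of the XML above; building a valid request is what I'm stuck on:

import xml.etree.ElementTree as ET

NS = '{http://www.google.com/update2/response}'
CHROME_APPID = '{8A69D345-D564-463C-AFF1-A69D9E530F96}'

def update_status(xml_text):
    # Walk the <gupdate> document and return the <updatecheck> status
    # for the Chrome app id, e.g. 'noupdate' in the sample above.
    root = ET.fromstring(xml_text)
    for app in root.findall(NS + 'app'):
        if app.get('appid') == CHROME_APPID:
            return app.find(NS + 'updatecheck').get('status')
    return None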
Google Chrome uses Omaha to do updates to clients. The protocol is described here: http://omaha.googlecode.com/svn/wiki/cup.html. One thing to notice is that Google Chrome automatically downloads the update to your computer and then notifies you (via an icon on the tools menu), unless you force a check by opening the About dialog (on Windows).
As you have noticed, the Chrome GUID is {8A69D345-D564-463C-AFF1-A69D9E530F96}.
The best way to see how Google Chrome updates is to check the source code, which is public; Google just #ifdefs its Chrome-specific bits in Chromium.
Google Chrome Code
The base class where all the updates happen is UpgradeDetector; it basically checks for an upgrade every hour on the dev channel and once a day on all the other channels and builds (stable/beta). The Chromium way to do scheduled events is through Tasks, in this case one called DetectUpgradeTask, which checks a specific BrowserDistribution::GetSpecificDistribution. There are many browser distributions; on Windows it is GoogleChromeDistribution, which is in charge of figuring out which version needs to be updated.
So why am I saying all this? Google Chrome just queries registry settings and local files to figure out whether a new update exists, and the UpgradeDetector simply compares the distributions to see if they are the same. This implies that Omaha does the whole update mechanism, and the best way to figure out what Omaha does is to look at its update protocol.
The Omaha Protocol
From quickly glancing at the protocol, the approach you're taking is the correct one, but you have to figure out the public key - in this case the w parameter, which differs for every request. You can read more about this in the "Protocol observations" section of the Omaha update protocol. All of this is done to make the update check secure and to protect the communication: they want the connection that checks for updates to be authentic and fresh, so an attacker cannot replace or modify the message, nor trick the client into installing a vulnerable version.
So what now?
It isn't just a simple request to the server to do an update check. The Omaha client protocol provides an alternative to SSL for update checks, and it does client-server requests to verify that the connection is valid. They do all this to protect the communication, as explained before.
Unfortunately, I don't think there is a "Chrome extension" HTML-ish way to do this unless you implement that handshake yourself using NPAPI - or unless you can do the whole handshake through XHR requests. Don't take my word for granted; I might be totally wrong. :)
Since you want to check whether Chrome has been updated, not just installed, you also have to verify that a new distribution has been downloaded, as explained above for GoogleChromeDistribution, which definitely requires NPAPI to read the registry.
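If you can read the registry (via NPAPI or a small native helper), getting at the installed version Omaha has recorded is straightforward. A hedged sketch in Python - the key path and the 'pv' (product version) value name are my assumption about where Omaha stores this, so verify them on your own machine:

import winreg

CHROME_GUID = '{8A69D345-D564-463C-AFF1-A69D9E530F96}'
# Assumed location: Omaha keeps a per-app 'pv' value under its Clients key.
KEY = r'Software\Google\Update\Clients' + '\\' + CHROME_GUID

def installed_chrome_version():
    # Read the product version that Omaha recorded at install/update time.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        version, _type = winreg.QueryValueEx(key, 'pv')
        return version

Comparing this value before and after an update cycle would tell you that a new distribution has arrived.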
Related
I have a web application, and we call a third party to process some data. Once it's done, the third party redirects back to my application (it's a POST redirect). To keep the session, we use cookies. After the Google Chrome update that made SameSite=Lax the default, I updated our cookies to be sent with SameSite=None; Secure to overcome this issue. Now, as of Google Chrome version 91, this implementation no longer works and I'm getting a session expiry issue. Can somebody help fix this for Google Chrome version 91 and later? I'm using Java.
The best we have been able to come up with is a client-side meta refresh. When the third party posts back to our application, we have a page filter that sends it to a "refreshMeta" page similar to https://www.w3.org/TR/WCAG20-TECHS/H76.html. This has to happen without calling .getSession() anywhere, because that would cause a new session to be created. The refresh makes the browser re-request the page and send all the original cookies back to the server, because the request now comes from the same domain and no new session was created.
I will say this worked for a while, but it looks like a change in Tomcat is preventing this approach from working like it did on earlier versions, which is why I'm back looking for another solution.
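For anyone trying the same thing: the refresh page itself is trivial. The asker's stack is Java/Tomcat, but here is a minimal WSGI sketch in Python just to show the shape of the response (the /landing URL is made up):

REFRESH_PAGE = b"""<!DOCTYPE html>
<html>
  <head>
    <!-- 0-second refresh back into the app, per WCAG technique H76 -->
    <meta http-equiv="refresh" content="0;url=/landing">
  </head>
  <body>Redirecting...</body>
</html>"""

def refresh_meta_app(environ, start_response):
    # Deliberately no session access here: creating a new session is
    # exactly what loses the original cookies.
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [REFRESH_PAGE]

Because the refresh request originates from the app's own domain, the browser attaches the original cookies and the existing session is found.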
We have an in-house (.Net) application that runs on our corporate desktops. It runs a small web server listening for HTTP requests on a specific port on localhost. We have a separate HTTPS website that communicates with this application by setting the ImageUrl of a hidden image to the URL of the local server - this invokes an HTTP request to localhost, which the application picks up on and actions. For example, the site will set the URL of the image to:
http://127.0.0.1:5000/?command=dostuff
This was to work around any kind of "mixed content" messages from the site, as images seemed to be exempt from mixed-content rules. A bit of a hack, but it worked well.
I'd seen that Chrome was making moves towards completely blocking mixed content on pages, and sure enough Chrome 87 (currently on the beta channel) now shows these warnings in the Console:
Mixed Content: The page at 'https://oursite.company.com/' was loaded
over HTTPS, but requested an insecure element
'http://127.0.0.1:5000/?command=dostuff'. This request was
automatically upgraded to HTTPS. For more information see
https://blog.chromium.org/2019/10/no-more-mixed-messages-about-https.html
However, despite the warning saying the request is being automatically upgraded, it hasn't been - the application still gets a plain HTTP request and continues to work normally.
I can't find any clear guidance on whether this warning is a "soft fail", and whether future versions of Chrome will enforce the auto-upgrade to HTTPS (which would break things). We have plans to replace the application in the longer term, but I'd like to be ahead of anything that will suddenly stop the application from working before then.
Will using HTTP to localhost for images and other mixed content, as used in the scenario above, be an actual issue in the future?
This answer will focus on your main question: Will using HTTP to localhost for images and other mixed content, as used in the scenario above, be an actual issue in the future?
The answer is yes.
The blog post you linked to says:
Update (April 6, 2020): Mixed image autoupgrading was originally scheduled for Chrome 81, but will be delayed until at least Chrome 84. Check the Chrome Platform Status entry for the latest information about when mixed images will be autoupgraded and blocked if they fail to load over https://.
That status entry says:
In developer trial (Behind a flag) (tracking bug) in:
Chrome for desktop release 86
Chrome for Android release 86
Android WebView release 86
…
Last updated on 2020-11-03
So this feature has been delayed, but it is coming.
Going through your question and all the comments, and putting myself in your shoes, I would do the following:
Don't mess with either the currently working .Net app/localhost server (HTTP) or the user-facing (HTTPS) front-end.
Write a simple/cheap cloud function (GCP Cloud Function or AWS Lambda) to completely abstract your .Net app away from the front-end. Your current HTTPS app would only call the cloud function (HTTPS to HTTPS - no more praying that Google won't shut down mixed traffic, which will happen eventually, although nobody knows when).
The cloud function would simply temporarily copy the image/data coming from the (insecure) .Net app to cloud storage and then serve it straight away through HTTPS to your client-side.
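A rough sketch of such a function (Python on GCP, using the functions-framework and google-cloud-storage packages; the bucket name is made up, and SOURCE_URL stands for wherever the .Net app's output is actually reachable from the function - that plumbing is the part you'd have to design):

import functions_framework
import requests
from google.cloud import storage

BUCKET = 'my-relay-bucket'                 # illustrative name
SOURCE_URL = 'http://internal-host:5000/'  # placeholder for the .Net app

@functions_framework.http
def relay(request):
    # Forward the command to the app over plain HTTP...
    command = request.args.get('command', 'dostuff')
    resp = requests.get(SOURCE_URL, params={'command': command}, timeout=10)
    # ...keep a temporary copy in cloud storage...
    blob = storage.Client().bucket(BUCKET).blob('relay/latest')
    blob.upload_from_string(resp.content)
    # ...and serve the result straight back over HTTPS.
    content_type = resp.headers.get('Content-Type', 'image/png')
    return resp.content, 200, {'Content-Type': content_type}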
I have been developing a chrome extension and now I want to create version 2 with database support. For this I am going to use Firebase.
I need to create a file in my extension where I add the details of the Firebase connection: API key, URL, etc.
After reading this question: Where does Chrome store extensions?
I went to look for mine, and boom, there it was. I could open the extension's files and see the details I had added about my Firebase connection.
However, I am unsure whether I could see it only because I am the developer and it's on my machine. I do not like the idea of having my access keys or URL exposed.
Not that I think anyone would sabotage it, but I would hate to be billed for overuse of my requests, etc.
I'm trying to record a test with JMeter for https://maps.google.com using JMeter's Test Script Recorder proxy. However, I get a "Your connection is not private" error, and it doesn't show the usual "proceed to https://maps.google.com" option.
Does anyone know how to proceed? Thanks.
First of all, I would suggest reconsidering the whole test scenario:
Leave Google Maps load testing to Google engineers
If you attempt to launch a load test against Google Maps - you'll get banned.
Even if your application uses Google Maps in a frame for something, you should exclude it from scope, as it isn't something you can control even if you aren't happy with its performance.
Just in case you still need it for any reason, you can try the following workarounds:
Under chrome://settings/:
Privacy -> Clear browsing data
HTTPS/SSL -> Uninstall the JMeter certificate
Try using a less "paranoid" browser, e.g. Firefox, which uses its own certificate store and proxy settings
There is an alternative way of recording a JMeter test which doesn't require setting up proxies and worrying about SSL certificates - the JMeter Chrome Extension
Your IP is being changed multiple times due to the proxy, so it makes Google think a hacker is IP-spoofing the site. Just change the https:// in the URL to http://.
Basically, what I need is a way to automate the result of the following operations:
open a new tab;
open the Network tab in the developer tools;
load a URL;
select "Save All as HAR".
Often, proposed solutions involve the use of PhantomJS, browsermob-proxy, or pcap2har; those won't fit my case, since I need to work with SPDY traffic.
I tried to dive into the Google Chrome extensions API, and indeed I managed to automate some tasks, but still no luck for what concerns HAR file generation. Now, this method is particularly promising, but I still can't figure out how I would use it.
In other words, I need something like this experiment from the Google guys. Note the following:
We used Chrome's remote debugging interface with a custom client that starts up the browser on the phone, clears its cache and other state, initiates a web page load, and receives the Chrome developer tools messages to determine the page load times and other performance metrics.
Any ideas?
Solution
For the curious, I ended up with a Node.js module that automates this kind of test: chrome-har-capturer. This also gave me the opportunity to dig deeper into the Remote Debugging Protocol and to write a lower-level Node.js interface for general-purpose Chrome automation: chrome-remote-interface.
The short answer is: there is no way to get at the data you are after directly. The getHAR method is only applicable to extensions meant to extend DevTools itself. The good news is, you can construct the HAR file yourself without too much trouble - this is exactly what phantom.js does:
Start Chrome with remote debugging
Connect to Chrome on the debugging port with a websocket connection
Enable "Network" debugging, you can also clear cache, etc - see Network API.
Tell the browser to navigate to the page you want to capture, and Chrome will stream all the request meta-data back to you.
Massage the network data into HAR format, à la phantom.js (a minimal sketch of steps 1-4 follows this list)
...
Profit.
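To make steps 1-4 concrete, here is a minimal Python sketch against the remote debugging interface (using the websocket-client package; the port and target URL are arbitrary). It only collects the raw Network.* events - massaging them into HAR, step 5, is the part left to you:

import json
import urllib.request
import websocket  # pip install websocket-client

# Step 1: start Chrome with --remote-debugging-port=9222
# Step 2: pick a tab and connect to its websocket debugger URL.
tabs = json.load(urllib.request.urlopen('http://localhost:9222/json'))
ws = websocket.create_connection(tabs[0]['webSocketDebuggerUrl'])

# Step 3: enable Network domain notifications.
ws.send(json.dumps({'id': 1, 'method': 'Network.enable'}))

# Step 4: navigate; Chrome streams request/response events back to us.
ws.send(json.dumps({'id': 2, 'method': 'Page.navigate',
                    'params': {'url': 'http://example.com'}}))

events = []
while True:
    msg = json.loads(ws.recv())
    if msg.get('method', '').startswith('Network.'):
        events.append(msg)  # raw material for the HAR file
    if msg.get('method') == 'Network.loadingFinished':
        break  # naive stop condition; real code needs something smarter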
For a head start, I have a post with sample Ruby code that should get you started on steps 1-4: http://www.igvita.com/2012/04/09/driving-google-chrome-via-websocket-api/
By now there's a browser plugin to do that: https://github.com/devtools-html/har-export-trigger
It uses the WebExtensions DevTools API and I got it to work with both Firefox and Chrome.
See my code for Chrome here: https://github.com/theri/web-measurement-tools/blob/master/load/load_url_using_chrome.py#L175
Automatically installing the plugin in Chrome is a bit more complicated than in Firefox, but feasible - I extracted the plugin archive locally and then link to it in chrome_prefs.json (see same repository).
Not sure if it helps, but HAR Recorder uses the Chrome debugging protocol to record HAR and generate a HAR file (without opening DevTools). If you want a variation, you can fork it and make changes.