How can I make an HTTP request wait before continuing - google-chrome

I'm developing an extension for Google Chrome and I'm monitoring HTTP requests. In the event handler for chrome.webRequest.onHeadersReceived I'm trying to introduce a delay. The handler cannot wait asynchronously (unlike WebExtensions in Firefox), and there is nothing like Thread.Sleep, critical sections, or reset events available. The only solution I see is spin waiting, which is a very bad choice. Even synchronous XMLHttpRequest is deprecated and doesn't work.
var headersReceived = function (e) {
  // ?????? some method to delay synchronously
  return {cancel: false};
};

chrome.webRequest.onHeadersReceived.addListener(headersReceived,
  {urls: ["*://*/*"]},
  ["blocking", "responseHeaders"]);
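For reference, the spin wait I mean would look roughly like this (a sketch only; it blocks Chrome's event loop, which is exactly why it is such a bad choice):

var headersReceived = function (e) {
  // Busy-wait for ~500 ms before letting the request continue.
  // This blocks the whole thread and burns CPU - shown only to
  // illustrate what "spin waiting" means here.
  var end = Date.now() + 500;
  while (Date.now() < end) { /* spin */ }
  return {cancel: false};
};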

You can try the network-spinner-devtool plugin. It is a browser devtools extension with URL-level configuration and control that lets you introduce a delay before sending an HTTP request or after receiving the response (the post-response delay is supported in Firefox only).
It supports both the Chrome and Firefox browsers.
It can also be installed from the Chrome Web Store ("Chrome DevTools").

Related

Programmatically start the performance profiling in Chrome

Is there a way to start the performance profiling programmatically in Chrome?
I want to run a performance test of my web app several times to get a better estimate of the FPS, but manually starting the performance profiling in Chrome is tricky because I'd have to manually align the frame models. (I am using this technique to extract the frames.)
CMD + Shift + E reloads the page and immediately starts the profiling, which alleviates the alignment problem, but it only runs for 3 seconds, as explained here. So this doesn't work.
Ideally, I'd like to click a button that starts my test script and also starts the profiling. Is there a way to achieve that?
In case you're still interested, or someone else may find it helpful, there's an easy way to achieve this using Puppeteer's tracing class.
Puppeteer uses Chrome DevTools Protocol's Tracing Domain under the hood, and writes a JSON file to your system that can be loaded in the dev tools performance panel.
To get a profile trace of your page's loading time you can implement the following:
const puppeteer = require('puppeteer');

(async () => {
  // launch puppeteer browser in headful mode
  const browser = await puppeteer.launch({
    headless: false,
    devtools: true
  });

  // start a page instance in the browser
  const page = await browser.newPage();

  // start the profiling, with a path to the out file and screenshots collected
  await page.tracing.start({
    path: `tests/logs/trace-${new Date().getTime()}.json`,
    screenshots: true
  });

  // go to the page
  await page.goto('http://localhost:8080');

  // wait for as long as you want
  await page.waitFor(4000);
  // or you can wait for an element to appear with:
  // await page.waitForSelector('some-css-selector');

  // stop the tracing
  await page.tracing.stop();

  // close the browser
  await browser.close();
})();
Of course, you'll have to install Puppeteer first (npm i puppeteer). If you don't want to use Puppeteer you can interact with the Chrome DevTools Protocol's API directly (see link above). I didn't investigate that option very much since Puppeteer delivers a high-level, easy-to-use API over CDP. You can also interact directly with CDP via Puppeteer's CDPSession API.
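For example, a minimal sketch of using a CDPSession (reusing the page instance from the snippet above; the Performance domain here is just an illustration of sending raw CDP commands) might be:

const client = await page.target().createCDPSession();
await client.send('Performance.enable');
// ... interact with the page ...
const { metrics } = await client.send('Performance.getMetrics');
console.log(metrics);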
Hope this helps. Good luck!
You can use the Chrome DevTools Protocol with any driver library from here https://github.com/ChromeDevTools/awesome-chrome-devtools#protocol-driver-libraries to create a profile programmatically.
Use this method - https://chromedevtools.github.io/devtools-protocol/tot/Profiler#method-start - to start a profile.
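As a rough sketch with one such driver library, chrome-remote-interface (assuming Chrome was started with --remote-debugging-port=9222 and the URL below is just a placeholder), it could look like this:

const CDP = require('chrome-remote-interface');
const fs = require('fs');

(async () => {
  const client = await CDP();
  const { Profiler, Page } = client;
  await Profiler.enable();
  await Page.enable();
  await Profiler.start();
  await Page.navigate({ url: 'http://localhost:8080' });
  // run your test scenario here, waiting as long as needed
  const { profile } = await Profiler.stop();
  // the resulting .cpuprofile can typically be loaded in the DevTools JavaScript Profiler panel
  fs.writeFileSync('profile.cpuprofile', JSON.stringify(profile));
  await client.close();
})();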

How to get the Request Headers using the Chrome Devtool Protocol

New Chrome versions (72+) do not send the requestHeaders.
There was a solution:
DevTools Protocol network inspection is located quite high in the network stack. This architecture doesn't let us collect all the headers that are added to the requests. So the ones we report in Network.requestWillBeSent and Network.requestIntercepted are not complete; this will stay like this for the foreseeable future.
There are a few ways to get real request headers:
• the crude one is to use a proxy
• the more elegant one is to rely on the Network.responseReceived DevTools protocol event. The actual headers are reported there as the requestHeaders field in the Network.Response.
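A sketch of that second approach, assuming the chrome-remote-interface library and a placeholder URL:

const CDP = require('chrome-remote-interface');

(async () => {
  const client = await CDP();
  const { Network, Page } = client;
  await Network.enable();
  Network.responseReceived(({ response }) => {
    // response.requestHeaders carries the actual headers (when reported)
    console.log(response.url, response.requestHeaders);
  });
  await Page.enable();
  await Page.navigate({ url: 'https://example.com' });
})();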
This worked fine with older Chrome versions but not with the latest ones; here is a small summary I made for the versions I could test:
A solution for Chrome v67 was to add these flags to disable Site Isolation:
chrome --disable-site-isolation-trials --disable-features=IsolateOrigins,site-per-process --disable-web-security
Now none of this works with the latest Chrome v73.
Maybe it is caused by this:
Issue 932674: v72 broke devtools request interception inside cross-domain iframes
You can use the Fetch protocol domain, which is available since m74.
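A minimal sketch of that suggestion, again assuming chrome-remote-interface:

const CDP = require('chrome-remote-interface');

(async () => {
  const client = await CDP();
  const { Fetch } = client;
  await Fetch.enable({ patterns: [{ urlPattern: '*' }] });
  Fetch.requestPaused(async ({ requestId, request }) => {
    // request.headers is where the headers are expected to show up
    console.log(request.url, request.headers);
    await Fetch.continueRequest({ requestId });
  });
})();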
The solution given does not work either; the Fetch.requestPaused event does not contain the request headers...
I found some info about what may cause that:
DevTools: do not expose raw headers for cross-origin requests
DevTools: do not report raw headers and cookies for protected subresources. In case subresource request's site needs to have its document protected, don't send raw headers and cookies into the frame's renderer.
Or is it caused by the server using HTTP/2?
Does the HTTP/2 header frame factor into a response’s encodedDataLength? (Remote Debugging Protocol)
...headersText is undefined for HTTP/2 requests
1- How can I get the request headers using the Chrome DevTools Protocol with Chrome v73+?
2- Can a WebExtension solve that?
3- Is there another way that will be stable and last longer? Like tshark + SSLKEYLOGFILE, which I'm trying to avoid. Thank you.

Websocket communication with multiple Chrome Docker containers

I have a Chrome container (deployed using this Dockerfile) that renders pages on request from an App container.
The basic flow is:
App sends an http request to Chrome and in response receives a websocket url to use (e.g. ws://chrome.example.com:9222/devtools/browser/13400ef6-648b-4618-8e4c-b5c73db2a122)
App then uses that websocket url to communicate further with Chrome, and to receive the rendered page. I am using the puppeteer library to connect to and communicate with the Chrome instance, using puppeteer.connect({ browserWSEndpoint: webSocketUrl });
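A sketch of that flow on the App side (the hostname and port are the example values above; a global fetch, available in Node 18+, or a polyfill is assumed, and /json/version is one common way to obtain the websocket URL):

const puppeteer = require('puppeteer');

(async () => {
  // Step 1: ask the Chrome container's remote-debugging port for its
  // DevTools websocket URL.
  const res = await fetch('http://chrome.example.com:9222/json/version');
  const { webSocketDebuggerUrl } = await res.json();

  // Step 2: connect Puppeteer to that instance over the websocket.
  const browser = await puppeteer.connect({ browserWSEndpoint: webSocketDebuggerUrl });
  // ... render pages, then browser.disconnect() when done ...
})();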
For a single Chrome container this works really well.
But I'm trying to scale things up to have multiple Chrome containers in a Docker swarm.
The problem is, I think, that the websocket url received by App is specific to the instance running in that particular Chrome container, so when it is used by App (and where there are now multiple Chrome containers), the websocket requests from App will not necessarily be routed to the right Chrome container.
What is the best way of dealing with this?
You’ve got the basic design correct, but the issue you’re experiencing is with session “stickiness”. However, instead of trying to re-route subsequent requests back to the appropriate machine, we should look for a way to avoid the "pre" request.
The best way to do that is to have your Chrome docker image man-in-the-middle all HTTP "upgrade" requests. This HTTP action is what every WebSocket client, including the puppeteer library (which is just a WebSocket client under the hood), emits prior to changing protocols. Doing this also obviates the need for a pre-connect call, since the proxying to Chrome happens on upgrade instead of exposing a URL for the app to use. Here's a pretty basic example of doing this with the http-proxy module:
const http = require('http');
const httpProxy = require('http-proxy');
const puppeteer = require('puppeteer');

const proxy = httpProxy.createProxyServer();

http
  .createServer()
  .on('upgrade', async (req, socket, head) => {
    // Launch a Chrome instance and proxy the WebSocket upgrade
    // straight to its DevTools endpoint.
    const browser = await puppeteer.launch();
    const target = browser.wsEndpoint();
    proxy.ws(req, socket, head, { target });
  })
  .listen(3000);
There are other benefits to this approach as well: you can limit things like concurrency and even inject scripts to be run at a later time. Those require a little more thought and preparation, but the overall idea remains the same. This also makes load balancing trivial since there's no need to make routing sticky.
If this is something you're interested in implementing, all that work is largely done for you in the browserless repo. It even allows for things like concurrency limitations and session time limitations, and includes a feature-rich IDE. You can find more docs on that project here.

Edit and replay XHR chrome/firefox etc?

I have been looking for a way to alter an XHR request made in my browser and then replay it.
Say I have a complete POST request done in my browser, and the only thing I want to change is a small value and then play it again.
This would be a lot easier and faster to do directly in the browser.
I have googled a bit around, and haven't found a way to do this in Chrome or Firefox.
Is there some way to do it in either one of those browsers, or maybe another one?
Chrome:
In the Network panel of DevTools, right-click the request and select Copy as cURL.
Paste and edit the request, then send it from a terminal, assuming you have the curl command.
Alternatively, in case you need to send the request in the context of a webpage, select "Copy as fetch" and edit and send the content from the JavaScript console panel.
Firefox:
Firefox allows you to edit and resend an XHR right from the Network panel (since Firefox 36).
Chrome now has Copy as fetch in version 67:
Copy as fetch
Right-click a network request then select Copy > Copy As Fetch to copy the fetch()-equivalent code for that request to your clipboard.
https://developers.google.com/web/updates/2018/04/devtools#fetch
Sample output:
fetch("https://stackoverflow.com/posts/validate-body", {
  credentials: "include",
  headers: {},
  referrer: "https://stackoverflow.com/",
  referrerPolicy: "origin",
  body: "body=Chrome+now+has+_Copy+as+fetch_+in+version+67%3A%0A%0A%3E+Copy+as+fetch%0ARight-click+a+network+request+then+select+**Copy+%3E+Copy+As+Fetch**+to+copy+the+%60fetch()%60-equivalent+code+for+that+request+to+your+clipboard.%0A%0A&oldBody=&isQuestion=false",
  method: "POST",
  mode: "cors"
});
The difference is that Copy as cURL will also include all the request headers (such as Cookie and Accept) and is suitable for replaying the request outside of Chrome. The fetch() code is suitable for replaying inside of the same browser.
Updating/completing zszep's answer:
After copying the request as cURL (bash), simply import it into the Postman app.
My two suggestions:
Chrome's Postman plugin + the Postman Interceptor Plugin. More Info: Postman Capturing Requests Docs
If you're on Windows then Telerik's Fiddler is an option. It has a composer option to replay http requests, and it's free.
Microsoft Chromium-based Edge supports "Edit and Replay" requests in the Network Tab as an experimental feature:
In order to enable the option you have to "Enable Experimental Features" (Control+Shift+I on Windows/Linux, or Command+Option+I on macOS) and tick the checkbox next to "Enable Network Console".
More details about how to Enable Experimental Tools and the feature can be found here
For Firefox the problem solved itself. It has the "Edit and Resend" feature implemented.
For Chrome, the Tamper extension seems to do the trick.
Awesome Requestly
Intercept & Modify HTTP Requests
https://chrome.google.com/webstore/detail/requestly-modify-headers/mdnleldcmiljblolnjhpnblkcekpdkpa
https://requestly.io/
Five years have passed, and this essential requirement did not get ignored by the Chrome devs.
While they offer no method to edit the data as in Firefox, they now offer a full XHR replay.
This makes it possible to debug AJAX calls.
"Replay XHR" will repeat the entire transmission.
There are a few ways to do this, as mentioned above, but in my experience the best way to manipulate an XHR request and resend it is to use Chrome DevTools to copy the request as cURL (right-click the request in the Network tab) and simply import it into the Postman app (giant Import button in the top left).
No need to install 3rd party extensions!
There is a JavaScript snippet which you can add as a browser bookmark and then activate on any site to track and modify requests.
For further instructions, review the github page.

Access-Control-Allow-Origin error in a chrome extension

I have a Chrome extension which monitors the browser in a special way, sending some data to a web server. In the current configuration this is localhost. So the content script contains code like this:
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function(data)...
xhr.open('GET', url, true);
xhr.send();
where the url parameter is 'http://localhost/ctrl?params' (or http://127.0.0.1/ctrl?params - it doesn't matter).
The manifest file contains all the necessary permissions for cross-site requests.
The extension works fine on most sites, but on one site I get the error:
XMLHttpRequest cannot load http://localhost/ctrl?params. Origin http://www.thissite.com is not allowed by Access-Control-Allow-Origin.
I've tried several of the permissions proposed here (*://*/*, http://*/*, and <all_urls>), but none of them solved the problem.
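That is, the permissions entry in the manifest looked something like this (a sketch covering just the patterns mentioned above, other manifest keys omitted):

"permissions": [
  "<all_urls>",
  "http://*/*",
  "*://*/*"
]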
So, the question is what can be wrong with this specific site (apparently there may be another sites with similar misbehaviour, and I'd like to know the nature of this), and how to fix the error?
(tl;dr: see two possible workarounds at the end of the answer)
This is the series of events that happens, which leads to the behavior that you see:
http://www.wix.com/ begins to load
It has a <script> tag that asynchronously loads the Facebook Connect script:
(function() {
  var e = document.createElement('script');
  e.type = 'text/javascript';
  e.src = document.location.protocol +
    '//connect.facebook.net/en_US/all.js';
  e.async = true;
  document.getElementById('fb-root').appendChild(e);
}());
Once the HTML (but not resources, including the Facebook Connect script) of the wix.com page loads, the DOMContentLoaded event fires. Since your content script uses "run_at" : "document_end", it gets injected and run at this time.
Your content script runs the following code (as best as I can tell, it wants to do the bulk of its work after the load event fires):
window.onload = function() {
  // code that eventually does the cross-origin XMLHttpRequest
};
The Facebook Connect script loads, and it has its own load event handler, which it adds with this snippet:
(function() {
  var oldonload = window.onload;
  window.onload = function() {
    // Run new onload code
    if (oldonload) {
      if (typeof oldonload == 'string') {
        eval(oldonload);
      } else {
        oldonload();
      }
    }
  };
})();
(this is the first key part) Since your script set the onload property, oldonload is your script's load handler.
Eventually, all resources are loaded, and the load event handler fires.
Facebook Connect's load handler is run, which runs its own code and then invokes oldonload. (this is the second key part) Since the page is invoking your load handler, it's not running in your script's isolated world, but in the page's "main world". Only the script's isolated world has cross-origin XMLHttpRequest access, so the request fails.
To see a simplified test case of this, see this page (which mimics http://www.wix.com), which loads this script (which mimics Facebook Connect). I've also put up simplified versions of the content script and extension manifest.
The fact that your load handler ends up running in the "main world" is most likely a manifestation of Chrome bug 87520 (the bug has security implications, so you might not be able to see it).
There are two ways to work around this:
Instead of using "run_at" : "document_end" and a load event handler, you can use the default running time (document_idle, after the document loads) and just have your code run inline.
Instead of adding your load event handler by setting the window.onload property, use window.addEventListener('load', func). That way your event handler will not be visible to the Facebook Connect script, so it will run in the content script's isolated world.
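A minimal sketch of that second workaround in the content script:

// Registered this way, the handler stays in the content script's isolated
// world; the page's own onload-wrapping code (Facebook Connect) never sees it.
window.addEventListener('load', function () {
  // code that eventually does the cross-origin XMLHttpRequest
});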
The access control origin issue you're seeing is likely manifest in the headers for the response (out of your control), rather than the request (under your control).
Access-Control-Allow-Origin is a policy for CORS, set in the header. Using PHP, for example, you use a set of headers like the following to enable CORS:
header('Access-Control-Allow-Origin: http://blah.com');
header('Access-Control-Allow-Credentials: true' );
header('Access-Control-Allow-Headers: Content-Type, Content-Disposition, attachment');
It sounds like the server is setting a specific origin in this header, so your Chrome extension is following the directive to allow cross-domain (POST?) requests from only that domain.