ERR_BLOCKED_BY_XSS_AUDITOR when downloading a file using Selenium

I'm trying to download a file with Selenium by simulating a click on a download button, but Chrome reports ERR_BLOCKED_BY_XSS_AUDITOR. If I use the "--disable-xss-auditor" argument to bypass it, the page reloads and nothing gets downloaded. What seems strange to me is that when I actually download the file with my mouse in a Chrome session that is controlled by Selenium, the file downloads fine.
Please help me understand: what does the XSS Auditor do? Why can't I download the file with Selenium?
BTW, I'm using Python, if it matters.
Thanks

X-XSS-Protection
The HTTP X-XSS-Protection response header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. Although these protections are largely unnecessary in modern browsers when sites implement a strong Content-Security-Policy that disables the use of inline JavaScript ('unsafe-inline'), they can still provide protections for users of older web browsers that don't yet support CSP.
Header type: Response header
Forbidden header name: no
Syntax
X-XSS-Protection: 0 disables XSS filtering.
X-XSS-Protection: 1 enables XSS filtering (usually the default in browsers). If a cross-site scripting attack is detected, the browser will sanitize the page (remove the unsafe parts).
X-XSS-Protection: 1; mode=block enables XSS filtering. Rather than sanitizing the page, the browser will prevent rendering of the page if an attack is detected.
X-XSS-Protection: 1; report=<reporting-uri> (Chromium only) enables XSS filtering. If a cross-site scripting attack is detected, the browser will sanitize the page and report the violation. This uses the functionality of the CSP report-uri directive to send a report.
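The header values listed above can be expressed as a tiny helper; this is just an illustrative sketch (the function name and policy labels are made up, not part of any library):

```python
# Sketch: map a policy name to an X-XSS-Protection response header value.
# The mapping mirrors the syntax list above; "off"/"filter"/"block" are
# hypothetical labels chosen for this example.
def xss_protection_value(policy: str) -> str:
    values = {
        "off": "0",                # disable XSS filtering
        "filter": "1",             # sanitize the page on detection
        "block": "1; mode=block",  # block rendering on detection
    }
    return values[policy]

print(xss_protection_value("block"))  # → 1; mode=block
```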
Background
As per Intent to Ship: Changes to the XSS Auditor, the Chromium team made two changes:
Change the default behavior to X-XSS-Protection: 1; mode=block, which blocks the page load by navigating to a unique origin when XSS is detected, rather than filtering out specific scripts.
Deprecate the filter mode, with the intent to remove it completely at some future date.
Implementation Status
XSS Auditor blocks by default: Chrome's XSS Auditor should block pages by default, rather than filtering out suspected reflected XSS. Moreover, we should remove the filtering option, as breaking specific pieces of a page's script has itself been an XSS vector in the past.
As per XSS Auditor: Block by default, remove filtering, this issue was discussed and a fix was attempted. More discussion happened in False positives with ERR_BLOCKED_BY_XSS_AUDITOR, and finally, in ERR_BLOCKED_BY_XSS_AUDITOR on bona fide site when posting to a forum, the Chromium team decided Status: WontFix.
Solution
You need to induce WebDriverWait for the desired element to be clickable. Here are some examples of WebDriverWait implementations:
Java:
new WebDriverWait(driver, 20).until(ExpectedConditions.elementToBeClickable(By.linkText("text_within_the_link"))).click();
Python:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.LINK_TEXT, "text_within_the_link"))).click()
C#:
new WebDriverWait(driver, TimeSpan.FromSeconds(10)).Until(ExpectedConditions.ElementToBeClickable(By.LinkText("text_within_the_link"))).Click();
Reference
Event 1046 - Cross-Site Scripting Filter
The misunderstood X-XSS-Protection

I slowed down the clicks (two clicks are needed to download; I added a sleep between them) and it works! I have no idea what happened...

XSS Auditor is a built-in feature of Chrome and Safari designed to mitigate cross-site scripting (XSS) attacks. It aims to identify query parameters containing malicious JavaScript and block the response if it believes the payloads were injected into the server response.
XSS is a vulnerability that occurs when data gets (mis)interpreted as code and executed in a victim's browser. The idea is to use browser automation such as Selenium WebDriver (optionally headless) and inject XSS payloads along with functional and user-interaction tests.
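The pattern the auditor looks for, a query parameter reflected unescaped into the response, can be sketched against a deliberately vulnerable local server. Everything here (the handler, the parameter name) is hypothetical and purely illustrative:

```python
import http.server
import threading
import urllib.parse
import urllib.request

# A deliberately vulnerable toy server: it reflects the "q" query
# parameter into the HTML response without escaping it.
class EchoHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        q = urllib.parse.parse_qs(query).get("q", [""])[0]
        body = ("<html><body>You searched for: %s</body></html>" % q).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

payload = "<script>alert(1)</script>"
url = "http://127.0.0.1:%d/?q=%s" % (
    server.server_address[1], urllib.parse.quote(payload))
page = urllib.request.urlopen(url).read().decode()
server.shutdown()

# If the raw payload appears in the response, the parameter was injected
# into the server response unescaped - exactly what the auditor detects.
reflected = payload in page
```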
Python doesn't have anything to do with it; I think it might be the Chrome version or something.
I have shared a link which will help you understand better:
Chrome: ERR_BLOCKED_BY_XSS_AUDITOR details

Related

chrome blocking the cookies even with samesite=None

I have a Flask application hosted on Heroku, embedded as an iframe in one of my websites.
Let's say a.com renders this <heroku_url>.com as an iframe.
When a user visits a.com, <heroku_url>.com is rendered and a session is created.
from flask import session, make_response

@app.route("/")
def index():
    session['foo'] = 'bar'
    response = make_response("setting cookie")
    response.headers.add('Set-Cookie', 'cross-site-cookie=bar; SameSite=None; Secure')
    return response
In Chrome DevTools, I see the cookie getting blocked. It works fine in Firefox though.
Am I setting the cookie properly?
I understand this is due to the Chrome 80 update, but I'm not sure about the workaround.
Setting the samesite attribute of the session cookie to None seems to have solved the problem.
I had to update Werkzeug (the WSGI web application library that Flask wraps) and update the session cookie, i.e.
app.config['SESSION_COOKIE_SAMESITE'] = 'None'
app.config['SESSION_COOKIE_SECURE'] = True
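Independent of Flask, the shape of the resulting Set-Cookie value can be sketched with the standard library's http.cookies module (the samesite attribute is supported from Python 3.8):

```python
from http.cookies import SimpleCookie

# Build the same kind of cookie the config above asks Flask to emit:
# SameSite=None to allow cross-site sending, Secure because Chrome
# requires it whenever SameSite=None is used.
cookie = SimpleCookie()
cookie["cross-site-cookie"] = "bar"
cookie["cross-site-cookie"]["samesite"] = "None"
cookie["cross-site-cookie"]["secure"] = True

header_value = cookie.output(header="Set-Cookie:")
print(header_value)
```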
However, this also depends on the user's preferences in chrome://settings/cookies.
Chrome will block session cookies, even with SameSite=None, if one of the options below is selected:
Block third-party cookies
Block all cookies
Block third-party cookies in Incognito (blocks in incognito mode).
You can check whether your browser is treating the cookies as expected by visiting the test site at https://samesite-sandbox.glitch.me/
If all the rows contain green checks (✔️), then your browser is handling cookies correctly and it's likely there is some kind of issue with the cookie itself; I would suggest checking the Issues tab and Network tab in DevTools to confirm the Set-Cookie header definitely contains what it should.
If there are any red or orange crosses (✘) on the test site, then something in your browser is affecting cookies. Check that you are not blocking third-party cookies (chrome://settings/cookies) or running an extension that may do something similar.

How to get the Request Headers using the Chrome Devtool Protocol

The new Chrome versions (72+) do not send the requestHeaders.
There used to be a workaround:
DevTools Protocol network inspection is located quite high in the network stack. This architecture doesn't let us collect all the headers that are added to the requests. So the ones we report in Network.requestWillBeSent and Network.requestIntercepted are not complete; this will stay like this for the foreseeable future.
There are a few ways to get real request headers:
• the crude one is to use proxy
• the more elegant one is to rely on Network.responseReceived DevTools protocol event. The actual headers are reported there as requestHeaders field in the Network.Response.
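The crude proxy option can be illustrated with a minimal header-capturing HTTP handler. This is only a sketch: it records the raw request headers it receives and answers directly, where a real proxy would forward the request upstream (and a browser would be pointed at it with --proxy-server):

```python
import http.server
import threading
import urllib.request

captured = []  # raw request headers seen by the "proxy"

class HeaderCapturingHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Record the headers exactly as received, lower-cased for
        # case-insensitive lookup later.
        captured.append({k.lower(): v for k, v in self.headers.items()})
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), HeaderCapturingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send a request with a custom header and confirm the handler saw it.
req = urllib.request.Request(
    "http://127.0.0.1:%d/" % server.server_address[1],
    headers={"X-Demo-Header": "1"})
body = urllib.request.urlopen(req).read()
server.shutdown()
```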
This worked fine with older Chrome versions but not with the latest ones. Here is a small summary I made for the versions I could test.
A solution for Chrome v67 was to add these flags to disable Site Isolation:
chrome --disable-site-isolation-trials --disable-features=IsolateOrigins,site-per-process --disable-web-security
None of this works with the latest Chrome v73.
Maybe it is caused by this:
Issue 932674: v72 broke devtools request interception inside cross-domain iframes
You can use the Fetch protocol domain, which is available since m74.
The solution given does not work either; Fetch.requestPaused does not contain the request headers...
I found some info that may explain this:
DevTools: do not expose raw headers for cross-origin requests
DevTools: do not report raw headers and cookies for protected subresources. In case a subresource request's site needs to have its document protected, don't send raw headers and cookies into the frame's renderer.
Or is it caused when it is an HTTP/2 server?
Does the HTTP/2 header frame factor into a response’s encodedDataLength? (Remote Debugging Protocol)
...headersText is undefined for HTTP/2 requests
link
1- How can I get the request headers using the Chrome DevTools Protocol with Chrome v73+?
2- Can a WebExtension solve that?
3- Is there another way that will be stable and last longer? Like tshark + SSLKEYLOGFILE, which I'm trying to avoid. Thank you.

Displaying content of 302 Redirect - or HTTP-compliant waiting screen

I'd like to have a waiting screen in HTML drawn before the user enters the site, because of some long-running auth processes that are out of my control. I'd like that screen to be fully HTTP-compliant, i.e.:
it should not respond with 200 OK if there is no actual content available yet (which eliminates the option of displaying an empty page placeholder with a loading indicator and loading the content with an AJAX call in the background)
it should respond with 302 Redirect if there actually is any redirection (which eliminates HTML's Meta Refresh feature).
The only "third way" I can see is to rely on standard 302 redirections. But I'll need the actual content of the response that resulted in the 302 to be rendered to the user while waiting for the second request (with "please wait" info or something). In most (all?) cases, browsers do not draw the content of those responses; they just go and wait for the data from the redirection.
The questions are:
in what cases is the content of a 302 Redirect response rendered? Is it possible to force browsers to render it before redirecting?
is there another way to approach the problem without breaking the HTTP protocol? Or is a 200 OK status for a page initially without any content not breaking the protocol?
The short answer to your question might be "not practically possible".
But you might be lucky.
Displaying content of a 302 - not possible
The specification requires that the client immediately go to the new URL specified in the 302 response. There is no way to wait, and no notion of a response to be rendered.
Someone must be waiting - server side option
You are expecting that some process shall wait until the lengthy authentication process is finished.
Such a process must be able to
1) initiate the authentication process
2) render some "waiting" page
3) check result of the authentication process
4) after the authentication succeeds, redirect you there
Step 1 - initiating the authentication process from the server side can be a real problem, as you will not have access to cookies and other authentication resources for the target site. You are serving another domain, so you will not have a chance to read this security-related stuff, which only your browser knows. But I will assume you manage somehow (e.g. by asking your user for this sort of information earlier).
Then your web server would start the authentication process for your client. This is rather strange, but you might try to do so by initiating an HTTP request to the target site from your server. This must be done asynchronously, as we also need to do other things, like giving your user something to render.
Step 2 - rendering some "waiting" page.
To render something on the browser side, you may return a page with a 200 status code. I do not think this breaks HTTP. You would return some nice "wait a moment" content, plus add a "Refresh" header to initiate a page refresh within a short time (say, 2 seconds).
Step 3 - check the status of the authentication process: your server will get another request resulting from the Refresh in the previous step. Your server must know the context of this activity, probably via a session id. In this context, it will find that there is an authentication process running. If authentication is not yet complete, repeat Step 2.
Step 4 - (still on your server) If authentication is complete, gather the information your client needs to connect to the target server as an authenticated user, and return a 302 with a link leading to the target server. This assumes the link allows connecting in an "authenticated" manner.
This approach is likely to fail in steps 1 and/or 4, but there might be situations where it would work (it depends on the target server).
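Steps 1-4 can be sketched with a toy server. Everything here is hypothetical: the slow authentication is simulated by a threading.Event, and "/target" stands in for the target site reached after authentication:

```python
import http.server
import threading
import urllib.request

auth_done = threading.Event()  # flips when the slow "authentication" ends

class WaitingHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/target":            # the authenticated destination
            body = b"welcome"
            self.send_response(200)
        elif not auth_done.is_set():          # step 2: waiting page + Refresh
            body = b"<html><body>wait a moment...</body></html>"
            self.send_response(200)
            self.send_header("Refresh", "2")  # browser re-polls in 2 seconds
        else:                                 # step 4: auth done, redirect
            body = b""
            self.send_response(302)
            self.send_header("Location", "/target")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), WaitingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]

first = urllib.request.urlopen(base + "/")   # step 2: the waiting page
refresh = first.headers.get("Refresh")
waiting_body = first.read()

auth_done.set()                              # pretend authentication finished
final = urllib.request.urlopen(base + "/").read()  # the 302 is followed
server.shutdown()
```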
Waiting process running within your client browser
Another option is for the process waiting for completed authentication to run within your browser. Using an AJAX process does not break HTTP; it is just another process running in parallel.
You could render some "waiting" content and then try to connect to the target server by AJAX.
However, here you are attempting a sort of cross-site request, so unless the target server allows you to make such a request (search for CORS; your web browser is running in the context of another domain), the web browser will reject it.
Assuming you succeed, your AJAX process will try to connect, and as soon as it succeeds, it will redirect the page to the target one.
Your server in role of proxy
You might modify the first proposed solution ("server side option") by taking over all the communication with the target server and providing similar content to your client.
Conclusions
Clarify roles of browser, your server and target server
It would be great if you drew a sequence diagram with lifelines "Browser", "MyServer", "TargetServer".
All the proposed solutions are dangerous
The biggest problem is that if you want to get authenticated for a domain other than the one you serve, you are asking your user to share very private information with you. Browsers do their best to prevent such behaviour; your user might be willing to share such information with your app, but such behaviour is very tricky.
Trying to solve a problem out of your reach is very tricky
To me it sounds like you are trying to somehow resolve a slow authentication process on a domain you do not have control of. This often leads to the very desperate situation "I do not have enough power to do it, but I have to". A good argument for rejecting such a requirement could be: "It would require breaking a few security-related standards."
The easiest solution would be to use a multipart/x-mixed-replace content-type response, with the first part served being the temporary waiting page and the second part the actual final content.
This content can be correctly served with a 200 OK.
The initial response should look like this, and should be flushed to the user as soon as possible while keeping the HTTP connection open:
HTTP/1.1 200 OK
Content-Type: multipart/x-mixed-replace;boundary=MYBOUNDARYSTRING
--MYBOUNDARYSTRING
Content-Type: text/html
<html><body><!-- your waiting page content here --></body></html>
--MYBOUNDARYSTRING
When your final page is ready, you just have to serve a second HTML part that will be displayed instead of the first one:
Content-Type: text/html
<html><body><!-- your final page here --></body></html>
--MYBOUNDARYSTRING
--
The final '--' indicates to the browser that the content is complete and that there are no additional parts.
Most modern browsers will correctly replace the first content part with the second when it becomes available, but this is user-agent specific, and older versions of Internet Explorer are known not to handle this MIME type correctly.
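The framing shown above can be assembled mechanically. This sketch only builds the byte payload; actually serving it with incremental flushes is left out:

```python
# Assemble a multipart/x-mixed-replace payload matching the framing above.
# The boundary string is arbitrary; it just must not occur inside the parts.
BOUNDARY = "MYBOUNDARYSTRING"

def part(html: str) -> str:
    # Each part: boundary delimiter, its own Content-Type, blank line, body.
    return ("--%s\r\n"
            "Content-Type: text/html\r\n"
            "\r\n"
            "%s\r\n") % (BOUNDARY, html)

payload = (
    part("<html><body><!-- your waiting page content here --></body></html>")
    + part("<html><body><!-- your final page here --></body></html>")
    + "--%s--\r\n" % BOUNDARY   # closing delimiter: no more parts follow
)
content_type = "multipart/x-mixed-replace;boundary=%s" % BOUNDARY
```

In a real server the first part is flushed immediately and the second part plus the closing delimiter are written once the final page is ready.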
Alternate solution
You can implement the same chunked-response technique with a single HTML document and rely on JavaScript and/or CSS to visually substitute the content for the user.
For example:
The initial response is flushed immediately:
HTTP/1.1 200 OK
Content-Type: text/html
Transfer-Encoding: chunked
6E
<html><head><!-- load my CSS and JS files --></head><body>
<div id="waiting"><!-- my waiting content --></div>
when your final content is ready, send a new chunk:
3E
<div id="final"><!-- my final content --></div></body></html>
0
To ensure that your #final div is displayed instead of #waiting, you can either:
add something like <script>$("#waiting").hide();</script> just before your closing </body> tag
or use CSS positioning to ensure that #final is displayed above #waiting, something like
<style>
body { position: relative; margin: 0; padding: 0 }
body > div { position: absolute; top: 0; left: 0; width: 100% }
#waiting { z-index: 0 }
#final { z-index: 1 }
</style>
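The 6E and 3E prefixes in the chunked example above are hexadecimal byte counts. A helper that frames chunks (hypothetical, for illustration only) looks like:

```python
def chunk(data: bytes) -> bytes:
    # A chunk is: its size in hex, CRLF, the bytes themselves, CRLF.
    return format(len(data), "X").encode("ascii") + b"\r\n" + data + b"\r\n"

def last_chunk() -> bytes:
    # The terminating zero-length chunk ends the response body.
    return b"0\r\n\r\n"

framed = (chunk(b'<div id="waiting"><!-- my waiting content --></div>')
          + chunk(b'<div id="final"><!-- my final content --></div></body></html>')
          + last_chunk())
```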

Chrome: Basic Auth image requests return 401, but not when called directly

Chrome throws 401 errors for Basic-auth'd IMG URLs in the vein of http://user:pass@path.to/image, but did not do this before with the same resources and code. When called directly, as in "open in new tab", the images load without a hitch. I'm puzzled. Any ideas?
From Chromium issue:
cbentzel@chromium.org wrote:
This is an intentional change - there are lots of security/compatibility tradeoffs that we have to make when tightening up browser security, and in this case the decision was made that the compatibility risk was worth it.

Chrome extension to listen and capture streaming audio

Is it possible for a Chrome extension to listen for streaming audio from any of the browser's tabs? I would like to capture the streaming audio data and then analyse it.
Thanks
You could try 3 ways; none of them provides a 100% guarantee of meeting your needs.
Before going into more detailed descriptions, I must note that Chrome extensions do not provide convenient tools for working at the per-connection level - the sufficiently low level required for stream capturing. This is by design. This is why the first way is:
To look at other browsers, for example Firefox, which provides low-level APIs for connections. They are already known to be used by similar extensions. You may have a look at MediaStealer. If you do not have a specific requirement to build your system on Chrome, you should possibly move to Firefox.
You can develop a Chrome extension which intercepts HTTP requests by means of the webRequest API, analyses their headers and extracts media URLs (such as those with an audio/mpeg MIME type in the HTTP headers). For a quick code example you may look at the following SO question - How to change response header in Chrome. Having the URL, you may force the appropriate media to download as a file. It will land in the default downloads folder and may have an unfriendly name. (I made such an extension, but I do not have requirements for further processing.) If you need to further process such files, it can be a challenge to monitor them in the folder and run additional analysis in a separate program.
You may have a look at NPAPI plugins in general, and their streaming APIs in particular. I can imagine that you create a plugin registered for, again, the audio/mpeg MIME type, which receives the data via the NPP_NewStream, NPP_WriteReady and NPP_Write methods. The plugin can be wrapped into a Chrome extension. Though I have made NPAPI plugins, I never used this API, and I'm not sure it will work as expected. Nevertheless, I'm mentioning this possibility here for completeness. This method requires some coding other than web coding, meaning C/C++. NB: NPAPI plugins are deprecated and have not been supported in Chrome since September 2015.
Taking into account that you have some external (to the extension) "fingerprinting service" in mind, which sounds like intelligent data processing, you may be interested in building the whole system outside of a browser. For example, you could involve an HTTP proxy, saving media from the passing traffic.
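The header analysis described in the second option boils down to inspecting the Content-Type of each intercepted response. A sketch of that check (the set of MIME types is illustrative, not complete):

```python
# Decide whether an intercepted response looks like streamable media
# by its Content-Type header. The MIME-type set below is illustrative.
AUDIO_MIME_TYPES = {"audio/mpeg", "audio/aac", "audio/ogg", "video/x-flv"}

def is_media_response(headers: dict) -> bool:
    content_type = headers.get("Content-Type", "")
    # Strip parameters such as "; charset=..." before comparing.
    return content_type.split(";")[0].strip().lower() in AUDIO_MIME_TYPES

print(is_media_response({"Content-Type": "audio/mpeg"}))               # → True
print(is_media_response({"Content-Type": "text/html; charset=utf-8"})) # → False
```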
If you're writing a Chrome extension, you can use the Chrome tabCapture API to record audio.
chrome.tabCapture.capture({audio: true}, function(stream) {
var recorder = new MediaRecorder(stream);
[...]
});
The rest is left as an exercise to the reader; MDN has more documentation on how to use MediaRecorder.
When this question was asked in 2013, neither chrome.tabCapture nor MediaRecorder existed.
Mac OSX solution using soundflower: http://rogueamoeba.com/freebies/soundflower/
After installing Soundflower it should appear as a separate audio device in the sound preferences (Apple > System Preferences > Sound). Divert the computer's audio to the 2ch option (stereo; 16ch is surround), then inside a DAW, such as Audacity, set the audio input to Soundflower. Now the sound should be channeled to your DAW, ready for recording.
Note: having diverted the audio from the internal speakers to Soundflower, you will only be able to hear the audio if the 'Soundflowerbed' app is actually open. You know it's open if there's an 8-legged blob in the top-right task bar. Clicking this icon gives you the Soundflower options.
My privoxy has the following log:
2013-08-28 18:25:27.953 00002f44 Request: api.audioaddict.com/v1/di/listener_sessions.jsonp?_method=POST&callback=_AudioAddict_WP_ListenerSession_create&listener_session%5Bid%5D=null&listener_session%5Bis_premium%5D=false&listener_session%5Bmember_id%5D=null&listener_session%5Bdevice_id%5D=6&listener_session%5Bchannel_id%5D=178&listener_session%5Bstream_set_key%5D=webplayer&_=1377699927926
2013-08-28 18:25:27.969 0000268c Request: api.audioaddict.com/v1/ping.jsonp?callback=_AudioAddict_WP_Ping__ping&_=1377699927928
2013-08-28 18:25:27.985 00002d48 Request: api.audioaddict.com/v1/di/track_history/channel/178.jsonp?callback=_AudioAddict_TrackHistory_Channel&_=1377699927942
2013-08-28 18:25:54.080 00003360 Request: pub7.di.fm/di_progressivepsy_aac?type=.flv
So I got the stream URL and recorded it:
D:\Profiles\user\temp>wget pub7.di.fm/di_progressivepsy_aac?type=.flv
--18:26:32-- http://pub7.di.fm/di_progressivepsy_aac?type=.flv
=> `di_progressivepsy_aac#type=.flv'
Resolving pub7.di.fm... done.
Connecting to pub7.di.fm[67.221.255.50]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [video/x-flv]
[ <=> ] 1,234,151 8.96K/s
I got a file that can be played in any multimedia player.