When I check my website with the W3C link checker, it shows:
Status: 403 Forbidden
"The link is forbidden! This needs fixing. Usual suspects: a missing index.html or Overview.html, or a missing ACL"
But that link works fine in a browser. How can I get the link checker to accept it?
That issue is usually caused by over-zealous security filtering which either assumes that all bots are evil or that all bots using certain libraries are evil.
The checklink software uses the LWP library, which often finds its way onto such blacklists (a quick way to confirm this is sketched below). The two ways to get checklink to pass the link are:
Download checklink, edit its user-agent string, then run it from your local system rather than from the W3C servers. (See also: installing CPAN modules.)
Change the security filtering on the server the link is pointing to (this obviously requires that you have access to that server)
Alternatively, you can check the links manually each time you perform a check.
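To confirm that user-agent filtering is what's blocking the checker, you can compare responses for the library's default agent and a browser-like one. A quick sketch with curl (http://example.com/page stands in for the flagged link; the exact LWP agent string varies by version):

# HEAD request identifying as a generic LWP client — expect 403 if UA filtering is active
curl -I -A "libwww-perl/6.05" http://example.com/page

# Same request with a browser-like user agent — expect 200
curl -I -A "Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/115.0" http://example.com/page

If both requests return the same status, the 403 is probably not user-agent based, and the server's access rules need a closer look.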
It's not always possible to follow all the standards.
If you have tested that the link works fine, then go ahead.
I'm unable to understand the meaning of this error, which I found in the console of the Edge browser:
Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'interest-cohort'.
Your guidance is much appreciated.
I can't see any errors in my HTML file, and I don't use Google Analytics.
This has to do with FLoC, or Federated Learning of Cohorts, an old experiment that Google ran to try to do away with third-party cookies. Something in your website is sending a header that blocks FLoC from running, and because the origin trial isn't enabled in the browser, it logs this warning. Unblocking it to get rid of the warning is fairly straightforward, but you have to decide whether you want to unblock FLoC or leave it as is. You will only see this issue in Chromium-based browsers. To say more, we'd need to know what type of website you are running: Drupal? WordPress? Which versions?
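If you decide to unblock it, look for wherever your site sends the opt-out header. As a sketch of what to search for, many sites (often via a CMS plugin or the server config) emit something like this PHP line:

// Sending this opt-out header is what makes browsers without the
// FLoC origin trial log the "interest-cohort" console message.
header('Permissions-Policy: interest-cohort=()');

Removing that header (or the equivalent line in your server configuration) silences the warning, at the cost of no longer opting your visitors out of FLoC.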
I'm getting this Chrome error when trying to POST and then GET a simple form.
The problem is that the Developer Console shows nothing about this and I cannot find the source of the problem by myself.
Is there any option for looking at this in more detail?
I'd like to view the piece of code triggering the error so I can fix it...
A simple way to bypass this error during development is to send a header to the browser.
Send the header before any other data is sent to the browser.
In PHP you can send this header to bypass the error:
header('X-XSS-Protection:0');
In ASP.NET you can send this header:
HttpContext.Response.AddHeader("X-XSS-Protection","0");
or
HttpContext.Current.Response.AddHeader("X-XSS-Protection","0");
In Node.js, send the header like this:
res.writeHead(200, {'X-XSS-Protection': '0'});
// or with Express
res.set('X-XSS-Protection', '0');
Chrome v58 might or might not fix your issue... It really depends on what you're actually POSTing. For example, if you're trying to POST some raw HTML/XML data within an input/select/textarea element, your request might still be blocked by the auditor.
In the past few days I hit this issue in two different scenarios: a WYSIWYG client-side editor and an interactive upload form featuring some kind of content preview. I managed to fix them both by base64-encoding the raw HTML before POSTing it, then decoding it on the receiving PHP page. This will most likely fix the issue. Most importantly, it raises the developer's awareness of the data coming in from POST requests, hopefully pushing them toward adopting effective encoding/decoding strategies and strengthening their web application against XSS-type attacks.
To base64-encode your content on the client side you can either use the native btoa() function, which is supported by most browsers nowadays, or a third-party alternative such as a jQuery plugin (I ended up using this, which worked ok).
To base64-decode the POST data you can then use PHP's base64_decode(str) function, ASP.NET's Convert.FromBase64String(str) or anything else (depending on your server-side scenario).
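As a minimal sketch of the receiving side in PHP, assuming the client posted btoa(rawHtml) in a field named content (the field name is just an example):

// Strict mode makes base64_decode() return false on malformed input.
$html = base64_decode($_POST['content'] ?? '', true);
if ($html === false) {
    http_response_code(400);
    exit('Invalid payload');
}
// Validate/sanitize $html here before storing or echoing it back.

Keep in mind that btoa() only handles Latin-1 input, so UTF-8 content needs an extra encoding step on the client.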
For further info, check out this blog post that I wrote on the topic.
In this case, being a first-time contributor at the Creative forums (some kind of vBulletin construct) and reduced to posting a PM to the moderators before getting forum access, it was easy to recognize the nature of the issue from the more popular answers above.
The command was
http://forums.creative.com/private.php?do=insertpm&pmid=
And as described above the actual data was "raw HTML/XML data within an input/select/textarea element".
The general requirement for handling such a bug (or feature) at the user end is some kind of quick fix, tweak, or twiddle. This post discusses options such as clearing the cache, resetting Chrome settings, creating a new user profile, or retrying the operation with a new beta release.
It was also suggested that one launch a new instance with the following:
google-chrome-stable --disable-xss-auditor
The launch actually worked on this Windows 10 1703 machine running Chrome 61, using this modified version:
chrome --disable-xss-auditor
However, on logging back in to the site and attempting the post again, the same error was generated. Perhaps the syntax wants refining or something else is awry.
It then seemed reasonable to launch Edge and repost from there, which turned out to be no problem at all.
This may help in some circumstances. Modify the Apache httpd.conf file and add (this requires mod_headers):
Header set X-XSS-Protection 0
It may have been fixed in Version 58.0.3029.110 (64-bit).
I've noticed that if there is an apostrophe ' in the text, Chrome will block it.
When I updated the href from javascript:void(0) to # in the page making the POST request, it worked.
For example:
<a href="javascript:void(0)">login</a>
Change it to:
<a href="#">login</a>
I solved the problem!
In my case, when I submit, I send the HTML to the action, and in the model I had a property that accepted the HTML, marked with [AllowHtml].
The solution consists of removing this [AllowHtml] attribute, and everything went OK!
Obviously I no longer send the HTML to the action, because in my case I do not need it.
It is a Chrome bug. The only remedy is to use Firefox until they fix it. The XSS auditor trashing a page that has worked fine for 20 years seems to be a symptom, not a cause.
I recently encountered a web page containing the following line of markup:
<script src="resource://ember-inspector-at-emberjs-dot-com/ember-inspector/data/in-page-script.js"></script>
Note that the scheme in the URL is 'resource' and that the URL is not for something that can be reached over the Internet.
This is not a URL scheme that I have previously encountered. Despite some searching on the matter, I can't find any information regarding the use of this scheme.
What is the purpose of the 'resource' scheme? If I were a browser, what would I do with this?
The resource: URI scheme is exclusive to Firefox and has been registered since Firefox v3.
It's used internally, related to chrome.manifest.
In Firefox, enter this in the address bar and navigate to it:
resource:///
You should see the directory structure of the local Firefox installation area (the on-disk resources that ship with Firefox).
Background
Mozilla has multiple URI schemes registered. These include resource: and chrome: (the latter being more commonly familiar).
A chrome directory is an important part of any Firefox installation. Inside the chrome directory there are data files, documents, scripts, images, etc. All of these files comprise the user interface elements and local user data.
But a chrome:// URI is actually just a special case of the lesser-known resource:// URI, which points to the top of the platform installation area. All paths in the chrome directory must begin with resource: or jar:.
Info found in Rapid Application Development with Mozilla written by Nigel McFarlane
Specific use case: Ember.js
For the specific case you referred to, you can find more details here: https://github.com/emberjs/ember-inspector/issues/82
Issues
We allowed accessibility for resource:/// which pointed at the
installed on-disk resources that came with Firefox. I don't know if we
supported alternate resource aliases at the time, but I'm sure add-ons
weren't using them and that we didn't support resource aliasing in
chrome.manifest (which didn't exist).
When we introduced resource into chrome.manifest we should have added
the option contentaccessible=yes mechanism at the same time: let
add-ons opt-in to fingerprintability just as we do with chrome
content. Unfortunately anything we do may have compatibility problems:
searching addon source I find 810 chrome.manifest files that define
custom resource:// locations. One reason for so many is because it's
used by JetPack addons so I'm somewhat hopeful that most of those
don't need to reference these from content.
Quoted from Reference 2 below.
The only reason extensions would need to use resource: is to make things available to web content.
Quoted from Reference 2 below.
Directly from Mozilla
I had a really hard time finding any mention of resource:// in any documentation by Mozilla, IANA, or W3C. This is the one and ONLY direct mention of the definition of resource: that I could find published directly from Mozilla. It was so obscure I took a screenshot :)
Further Reading:
Bugzilla report on resource:// security vulnerability
Another Bugzilla report (source of quote above)
IANA resource:// Resource Identifier provision
IANA Complete List of URI-Scheme Assignments
On an intranet site, let's say I want to link to a file on a share using UNC, at:
\\servername\foldername\filename.rtf
It seems the correct way to do this is with markup like this:
<a href="file://///servername/foldername/filename.rtf">filename.rtf</a>
That's five slashes - two for the protocol, one to indicate the root of the file system, then two more to indicate the start of the server name.
This works fine in IE7, but in Firefox 3.6 it will only work if the html is from a local file. I can't get it to work when the file comes from a web server. The link is "dead" - clicking on it does nothing.
Is there a workaround for this in Firefox? Those two browsers should be all I need to worry about for now.
Since this is obviously a feature of Firefox, not a bug, can someone explain what the benefit is to preventing this type of link?
This question has been asked at least twice before, but I was unable to find those posts before posting my own (sorry):
Open a direct file on the hard drive from firefox (file:///)
Firefox Links to local or network pages do not work
Here is a summary of answers from all three posts:
Use WebDAV — this is the best solution for me, although much more involved than I had anticipated.
Use http:// instead of file:///// — this will serve up a copy of the document that the user cannot edit and save.
Edit user.js on the client as described here — this worked for me in Firefox 3.6.15, but without access to client machines, it's not a solution.
In Firefox, use about:config and change the security.fileuri.strict_origin_policy setting to false (see the user.js equivalent after this list) — this doesn't work for me in 3.6.15. Other users on [SO] have also reported that it doesn't work.
Use the locallinks Firefox extension — this sets security.fileuri.strict_origin_policy to false for you, and appears to have no other effect.
Read the file server-side and send it as the response — this presents the same problem as simply configuring your web server to use http://.
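For reference, the about:config pref from the fourth bullet expressed as a user.js line (note the all-lowercase name):

user_pref("security.fileuri.strict_origin_policy", false);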
Browsers like Firefox refuse to open the file:// link when the parent HTML page itself is served using a different protocol like http://.
Your best bet is to configure your webserver to provide the network mapped file as a web resource so that it can be accessed by http:// from the same server instead of by file://.
Since it's unclear which webserver you're using, I can't go into detail as to how to achieve this.
In Firefox, to open file://///yourFileServer/docs/doc.txt for example, you need to turn on some options in the Firefox configuration (via user.js):
user_pref("capability.policy.policynames", "localfilelinks");
user_pref("capability.policy.localfilelinks.sites", "http://yourServer1.companyname.com http://yourServer2.companyname.com");
user_pref("capability.policy.localfilelinks.checkloaduri.enabled", "allAccess");
As it turns out, I was unaware that Firefox had this limitation/feature. I can sympathize with the feature, as it prevents a user from unwittingly accessing the local file system. Fortunately, there are useful alternatives that can provide a similar user experience while sticking to the HTTP protocol.
One alternative to accessing content via UNC paths is to publish your content using the WebDAV protocol. Some content managements systems, such as MS SharePoint, use WebDAV to provide access to documents and pages. As far as the end-user experience is concerned, it looks and feels just like accessing network files with a UNC path; however, all file interactions are performed over HTTP.
It might require a modest change in your file access philosophy, so I suggest you read about the WebDAV protocol, configuration, and permission management as it relates to your specific server technology.
Here are a few links that may be helpful if you are interested in learning more about configuring and using WebDAV on a few leading HTTP servers (a minimal Apache sketch follows the links):
Apache Module mod_dav
IIS 7.0 WebDAV Extension
Configuring WebDAV Server in IIS 7, 6, 5
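For example, on Apache a minimal setup might look roughly like this (a sketch only: the alias and filesystem path are placeholders, mod_dav and mod_dav_fs must be loaded, and a real deployment needs authentication):

# httpd.conf — minimal WebDAV sketch
DavLockDB /var/lock/apache2/DavLock

Alias /docs "/mnt/servername/foldername"
<Directory "/mnt/servername/foldername">
    # Turn on WebDAV for this directory (Apache 2.4 access syntax below)
    Dav On
    Options +Indexes
    Require all granted
</Directory>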
Add your own policy: open "about:config" in the location bar and add three new entries:
capability.policy.policynames MyPolicy
capability.policy.MyPolicy.sites http://localhost
capability.policy.MyPolicy.checkloaduri.enabled allAccess
Replace http://localhost with your website.
Works with Firefox 70.0.
I don't know if this will work, but give it a shot! Old article, but potentially still useful.
http://www.techlifeweb.com/firefox/2006/07/how-to-open-file-links-in-firefox-15.html
I have two simple questions:
How can I tell what server a website is running on? I remember I used to read the HTTP response headers to identify the type of server. Is there any tool to do it?
2a. A lot of websites have the page extension .html when you just know they are not plain HTML. How can I tell what programming language is behind them?
2b. For ASPX, I think IIS can map the extension, so it will show .html instead of .aspx, right?
Cheers
1.
Yes, you can check the "Server" HTTP response header. Example responses:
- Microsoft-IIS/6.0
- GFE/1.3
- Apache/2.2.11 (Ubuntu) PHP/5.2.6-3ubuntu4.2 with Suhosin-Patch
You can also check the "X-Powered-By" header on some servers, for example:
- PHP/5.2.6-3ubuntu4.2
- ASP.NET
You can do this in Firefox with Firebug, for example: go to the Net tab, pick a request, select Headers, and look under Response Headers. You can do the same in Fiddler or any other HTTP sniffer.
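If you'd rather script the check, PHP's get_headers() returns the same information (example.com is a placeholder):

// The second argument keys the result array by header name.
$headers = get_headers('http://example.com', 1);
echo ($headers['Server'] ?? 'no Server header'), "\n";
echo ($headers['X-Powered-By'] ?? 'no X-Powered-By header'), "\n";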
2a)
See my first answer
2b)
Yes, you can map .html (or any other extension) as an ASP.NET extension, meaning the extension will be handled by the web application. A common approach is an HTTP handler that catches that extension, registered in web.config.
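As a rough illustration for classic ASP.NET (the handler type MyApp.HtmlHandler is hypothetical, and IIS itself must also be configured to hand .html requests to ASP.NET):

<!-- web.config: route *.html through a custom HTTP handler -->
<system.web>
  <httpHandlers>
    <add verb="*" path="*.html" type="MyApp.HtmlHandler, MyApp" />
  </httpHandlers>
</system.web>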
I'm not sure what the end goal of these questions is, or rather what their purpose is; maybe we could answer better knowing that.
Look at the HTTP headers. This works as long as the server admin hasn't disabled them (which they usually don't).
Try http://kalender-365.de/ip/get-http-header.php
2a. This actually works with all servers and all extensions. Some interpreters, such as PHP, send a special X-Powered-By HTTP header (which can be disabled, however).