Really strange permissions issue on page titles with slashes - mediawiki

I'm using:
MediaWiki 1.34
PHP 7.4
Apache 2.4.46
MySQL 8.0.22
The problem I'm having is that I get permission denied errors from the API when I try to edit, move, or delete pages that have a "/" in their titles. This happens through the UI as well, and in some cases things just fail silently, so I had to open the browser's dev tools and look at the network tab to see the error.
This has been happening for a while now and we've managed to get by simply by not using "/" in any page titles, but the Translate extension was recently installed and has been causing issues because translations are stored in pages named like title/segment/lang.
Saving a translation POSTs this data:
action=edit&
format=json&
title=Translations%3ATest%2FPage+display+title%2Fes&
text=asdasdsasd&
summary=&
assert=user&
token=XXXX19%2B%5C
and it returns this from api.php:
{
"error":
{
"code":"permissiondenied",
"info":"You are not allowed to execute the action you have requested.",
"*":"See https://localhost/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes."
}
}
The user definitely has all of the translation permissions set; this happens even if I set every Translate extension permission to true for "*".
Does anyone know what could be causing this, or whether there's a way to debug it and isolate which permission is failing and where it's being checked? Everything else seems to work fine. I don't have much experience with MediaWiki; someone else used to manage it, but they've left.

Try adding AllowEncodedSlashes NoDecode to your Apache config, and nocanon to the ProxyPass directive in it.
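For reference, a minimal sketch of where those directives could go, assuming a reverse-proxied MediaWiki behind a typical Apache VirtualHost (the backend address and paths here are placeholders, not taken from the question):

<VirtualHost *:443>
    ServerName localhost

    # Pass encoded slashes (%2F) through to MediaWiki untouched instead of
    # decoding (and, by default, rejecting) them.
    AllowEncodedSlashes NoDecode

    # nocanon stops mod_proxy from canonicalizing/re-encoding the request
    # path before it reaches the backend.
    ProxyPass        "/" "http://127.0.0.1:8080/" nocanon
    ProxyPassReverse "/" "http://127.0.0.1:8080/"
</VirtualHost>

If there is no proxy in front of MediaWiki, AllowEncodedSlashes NoDecode alone is the relevant part; reload Apache after the change.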

Related

Chrome Invalid SSL Certificate Security Warning

I ran into an interesting problem today using Chrome and I'm hoping there is a better way to fix it than what I ended up doing.
The issue starts with an invalid SSL certificate on a site that I'm configuring. In Chrome it's possible to advance past this screen using a link which adds a security exception for the current domain so that you don't have to view this warning message again.
It's also possible to clear this warning by going to the site with the exception then clicking the Not secure text and choosing the Re-enable warnings option.
Now my problem: I have a couple of different redirects in place on the site that redirect my .com and .bank domains to the primary .net domain. While developing I added security exceptions for all three of these domains. This becomes an issue when testing that my SSL certificate is configured properly. I want to clear out Chrome's stored exception for the .com domain, but I cannot do so using the Re-enable warnings option, because as soon as I arrive at the page Chrome sees that an exception is already stored, loads the page normally, and then redirects to the .net domain. Because of this there is no point at which I can actually clear the bypassed security warning in Chrome...
The only way I've been able to find to clear out these exceptions is to use the Reset option in Chrome's settings, which is not something I want to do regularly. I'm wondering if there is a hidden settings page in Chrome that lists all of the bypassed security warnings so that I may clear them out individually.
To "Re-enable warnings" for all SSL warnings if you don't want to clear your history (or if you dont know all the exemptions you have in place), you can close Chrome and edit:
"C:\Users\USER\AppData\Local\Google\Chrome\User Data\Default\Preferences"
and set "ssl_cert_decisions": {},
Stored in the JSON-path:
profile > content_settings > exceptions > ssl_cert_decisions
Or you can change the decision_expiration_time of the specific exemption to be equal to the last_modified time
Example: "ssl_cert_decisions":{"https://expired.badssl.com:443,*":{"last_modified":"13235055329485008","setting":{"cert_exceptions_map":{"-201cgaDTf2DD6Cj0N6/tKvudkzDuRBA3GwKd8T9hE7mHhQ=":1},"decision_expiration_time":"13235055329485008","version":1}}}
You will have to clear the browsing data for that site. The easiest way I found to do this is to press Ctrl+Shift+Del to bring up the Clear browsing data window, set the time range to 1 hour, choose browsing history only, then click Clear data. Hope this is useful.

Chrome: ERR_BLOCKED_BY_XSS_AUDITOR details

I'm getting this Chrome error page when trying to POST and then GET a simple form.
The problem is that the Developer Console shows nothing about this and I cannot find the source of the problem by myself.
Is there any option for looking at this in more detail?
I'd like to view the piece of code triggering the error so I can fix it...
The simple way to bypass this error during development is to send a header to the browser.
Send the header before any other output is sent to the browser.
In PHP you can send this header to bypass the error:
header('X-XSS-Protection:0');
In ASP.NET you can send the header like this:
HttpContext.Response.AddHeader("X-XSS-Protection","0");
or
HttpContext.Current.Response.AddHeader("X-XSS-Protection","0");
In Node.js you can send the header like this:
res.writeHead(200, {'X-XSS-Protection':0 });
// or express js
res.set('X-XSS-Protection', 0);
Chrome v58 might or might not fix your issue... It really depends on what you're actually POSTing. For example, if you're trying to POST some raw HTML/XML data within an input/select/textarea element, your request might still be blocked by the auditor.
In the past few days I hit this issue in two different scenarios: a WYSIWYG client-side editor and an interactive upload form featuring some kind of content preview. I managed to fix them both by base64-encoding the raw HTML before POSTing it, then decoding it on the receiving PHP page. This will most likely fix the issue and, most importantly, increase the developer's awareness of the data coming from POST requests, hopefully pushing them toward adopting effective data encoding/decoding strategies and strengthening their web application against XSS-type attacks.
To base64-encode your content on the client side you can either use the native btoa() function, which is supported by most browsers nowadays, or a third-party alternative such as a jQuery plugin (I ended up using this, which worked ok).
To base64-decode the POST data you can then use PHP's base64_decode(str) function, ASP.NET's Convert.FromBase64String(str) or anything else (depending on your server-side scenario).
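As a rough illustration of the receiving side, assuming the client POSTs the editor's HTML as btoa(html) in a form field named content (the field name is just a placeholder):

<?php
// Decode the base64 payload that the client produced with btoa().
$encoded = $_POST['content'] ?? '';
$html = base64_decode($encoded, true); // strict mode: returns false on invalid input

if ($html === false) {
    http_response_code(400);
    exit('Invalid payload');
}

// $html is still untrusted input: sanitize/escape it before storing or echoing it.

Note that btoa() only handles Latin-1 strings, so content containing other Unicode characters needs to be converted (for example via encodeURIComponent) before base64-encoding.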
For further info, check out this blog post that I wrote on the topic.
In this case, being a first-time contributor at the Creative forums (some kind of vBulletin construct) and reduced to posting a PM to the moderators before getting forum access, it was easy to recognise the nature of the issue from the more popular answers above.
The command was
http://forums.creative.com/private.php?do=insertpm&pmid=
And as described above the actual data was "raw HTML/XML data within an input/select/textarea element".
The general requirement for handling such a bug (or feature) at the user end is some kind of quick fix, tweak, or twiddle. This post discusses the options of clearing the cache, resetting Chrome settings, creating a new user, or retrying the operation with a new beta release.
It was also suggested that one launches a new instance with the following:
google-chrome-stable --disable-xss-auditor
The launch actually worked in this Windows 10 1703, Chrome 61 installation after this modified version:
chrome --disable-xss-auditor
However, on logging back in to the site and attempting the post again, the same error was generated. Perhaps the syntax wants refining or something else is awry.
It then seemed reasonable to launch Edge and repost from there, which turned out to be no problem at all.
This may help in some circumstances. Modify the Apache httpd.conf file and add:
Header set X-XSS-Protection "0"
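For context, a minimal sketch assuming Apache 2.4 with mod_headers enabled (the VirtualHost is just a placeholder):

# Debian/Ubuntu: enable mod_headers first, e.g.
#   a2enmod headers && systemctl reload apache2
<VirtualHost *:80>
    # Tell the browser to turn its XSS auditor off for responses from this host.
    Header set X-XSS-Protection "0"
</VirtualHost>

The same Header line can also live in a <Directory> block or an .htaccess file if AllowOverride permits it.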
It may have been fixed in Version 58.0.3029.110 (64-bit).
I've noticed that if there is an apostrophe ' in the text Chrome will block it.
When I updated the href from javascript:void(0) to # on the page making the POST request, it worked.
For example:
<a href="javascript:void(0)">login</a>
Change to:
<a href="#">login</a>
I solved the problem!
In my case, when I submit, I was sending HTML to the action, and in the model I had a property that accepted the HTML via the "AllowHtml" attribute.
The solution was to remove the "AllowHtml" attribute, and everything worked!
Obviously I no longer send the HTML to the action, because in my case I do not need it.
It is a Chrome bug. The only remedy is to use Firefox until they fix it. The XSS auditor trashing a page that has worked fine for 20 years seems to be a symptom, not a cause.

TFS 2015 Code Viewer Not Working in Google Chrome

I found the following issue here on Stack Overflow, however I cannot comment on it as yet. I have a similar issue and wonder if there is anyone out there who has solved it.
https://stackoverflow.com/questions/40917501/tfs-2015-web-portal-code-viewer-not-working#
I am encountering something similar here: in-house TFS 2015, can't view code in the web portal using Google Chrome, however IE is fine. I, however, am not using HTTPS, so I may be experiencing something slightly different.
When I do try to view a file in Chrome, the window where the code listing should be is simply blank. I did note too that the button for creating a new build definition appears to be indicating a broken image link.
This has not always been an issue. Around 4 months ago I could get the code view fine in Chrome and, to my knowledge as I have no access to the servers, nothing has changed apart from Chrome updates.
I've tried getting to previous versions of Chrome to no avail, though I wouldn't know which version I was on when this did work.
Interestingly, I have one or two .MD files around and these display perfectly well. They are simple text files. However when saved with .TXT extension (or anything else I've tried), they do not show. Curious.
Update
As you will see from the screenshot below, when a file has been selected, in this case a .SQL file, nothing at all appears where I would expect the view to populate.
As for the F12, I do get 5 of these:
Failed to load resource: net::ERR_CONNECTION_REFUSED
plus associated paths, of course. We use Webroot internally here, which has recently dropped in a Chrome extension; however, even when Webroot is disabled in its entirety (including removal of the extension) I get the same behaviour.
All other Chrome extensions have been removed too at varying times to try to give a clean browser.
I have no other pop up blockers, ad blockers, etc installed on the workstation.
Problem solved thanks to the F12 key suggestion.
After some grovelling I was granted domain admin privs to have a dig around everything. It turns out that TFS was installed on ServerA with a URL port of 8080; this I knew from the original install, and it is obviously the path I follow to get to my TFS web interface. What had also been done subsequently, with no consultation of the Dev user group, was that a second TFS application tier had been installed on ServerB, and the port here was 8088.
I had not noticed the difference in path initially, assuming it was Chrome or workstation related. Anyway, I altered the port on ServerB to 8080 and everything jumped into life. I should not have made assumptions and should have paid more attention to the path in the error!
It seems the second application tier was set up on a non-production environment to allow senior Dev users access to the TFS Management Console rather than allowing them access to the original app tier which was on a production box. Our IT Operations just forgot to tell anyone.
Try updating your Chrome to the latest version (55.0.2883.87 m (64-bit)).
Also clear Chrome's cache. I have encountered similar issues; the solution was to clear the cache and connect to the web portal using another ID, then connect back using the original ID. I have no idea which one solved the problem. You could try both.
This problem is likely an isolated case, since TFS 2015 has been released for a long time.

SSRS URLs having issues

A little background info...
By and large the URLs worked perfectly fine. Occasionally either my machine or the server itself couldn't access the Web Service URL or the Report Manager URL. For the server a restart fixed this; for me I had to reset my Winsock, which never worked, so I ended up using System Restore to go back to a working date.
When I say couldn't access I mean getting the "This Page Cannot Be Displayed" message, or the "Please turn on TLS 1.0 etc etc" message.
Over the last few days the issue has become widespread. Everyone was having issues getting to the URLs, even on the server itself. I figured it may have been some Windows updates causing issues, so I removed all the updates from around the timeframe in which it started, tested, and got nothing.
I came back the next day (today) to the same issue, except the only way to access the URLs is through a hyperlink that is clicked or copied and pasted.
The issue:
If you manually type the URL it will not work. You have to copy and paste the hyperlink from a working page. I used a link to a rendered report and deleted back to /ReportServer and it pulls up the directory. I've never seen something like this happen before.
The Solution:
Apparently you have to type in www. as well.
I was so used to skipping that for most pages.
https://analytics.domain.com/ReportServer = fail
https://www.analytics.domain.com/ReportServer = win

Chrome won't let me access localhost (it Google searches instead)

How can I disable chrome using address bar for google search?
I can't access localhost at 0.0.0.0:6000, because Chrome thinks it's a Google search and not a URL.
Any ideas?
This may be because a URL was added to your search history. Google is doing you a favor because that's what you did once before somehow.
Try clearing your history to see if this is the case.
I went to "Clear Browsing Data" in Settings > Advanced and I was able to enter a localhost URL again.
Be sure you end the URL with a "/" symbol.
Try:
http://0.0.0.0:6000/
Instead of:
0.0.0.0:6000
SOLUTION for Chrome version 62:
settings > advanced > Use a web service to help resolve navigation errors > OFF
Try
- turn off Search Suggestions
- under LAN settings, uncheck "use automatic configuration"
Do you get the same behavior with an incognito window?
Try this alternative: http://127.1.1.1:6000 or 127.1.1.1:6000 (without http).
A search through the web has revealed several different possible causes to this problem as well as solutions.
Causes:
this is malware
this is a chrome/browser issue
this is a misconfiguration of the httpd service
Solutions:
scan for malware, though probably not the issue.
update the hosts file, so that localhost points to 127.0.0.1 (see the sketch after this list)
confirm your httpd service is accepting http:// requests through port 80, 8080, or some other port.
make sure the URL has the correct port .. http://localhost:8080
try appending different delimiting characters [ / ? # ] to the URL ...
localhost/Dir/
localhost/Dir?p=x
localhost/Dir#123
does it happen on a specific directory?
localhost/Dir .. works
localhost/Dir2 .. goes to search.
This suggests that the httpd service's configuration may need to be fixed, or that an .htaccess file is the cause. If these problems can be ruled out, then it's a browser issue.
remove the autocomplete entry from the browser's search/address bar
in most cases, type the URL, use the arrow keys to highlight the offending entry, then press [DELETE] or [SHIFT]+[DELETE].
does your browser support turning off autocomplete or searching from the address bar?
.. chrome .. Disable Predictive Text
.. firefox .. Disable Predictive Text
open the CLI and issue [ >wget localhost/dir ]. Observe what is returned for a working vs. a non-working directory (illustrated below).
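A quick sketch of the hosts-file and wget checks from the list above (Dir and Dir2 are just the placeholder directories already used there):

# hosts file: /etc/hosts on Linux/macOS,
# C:\Windows\System32\drivers\etc\hosts on Windows
127.0.0.1   localhost

# Compare what the server returns for a working vs. a non-working directory:
wget http://localhost/Dir
wget http://localhost/Dir2

If wget gets a sensible response for both, the httpd side is fine and the problem is in the browser.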
Follow up:
This issue is especially annoying when working with .htaccess files and browser redirect statements in PHP, ASP, Node.js, and JSP which redirect the browser to a specific URL but instead take the user to a search page. If at all possible, prepend http:// and append a slash (/) to the URL in redirects.
http://localhost/dir/
And, YES ... it is annoying to have to add extra characters (as a work around) to get URLs to work right.
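As an illustration of that advice, a minimal PHP redirect sketch (the target path is just a placeholder):

<?php
// Redirect with an explicit scheme, host, and trailing slash so the browser
// treats the target as a URL rather than something to search for.
header('Location: http://localhost/dir/');
exit;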
I might be a bit late to the party here, but I just encountered this same problem on the current Chrome (v. 96.0.4664.93) and it would not resolve any local address using any of the other methods that I found on the web (http://192.168.178.1, setting flags, or insecure DNS).
What solved the problem was a simple "/" at the beginning of the query.
Whenever I enter
/192.168.178.1
into the Omnibox and hit enter it works for all services.
If some other people could verify this - I'd be really happy to get some feedback on this approach.
I have this problem often myself. First, go to just localhost; Chrome should understand that. Then, when you type in localhost/my/url/here, the context will allow Chrome to understand to treat it as a URL.