Junk characters in URL when domain forwarding - html

I've been facing this issue lately. I forwarded my domain to one of the files hosted on my GoDaddy shared hosting. However, whenever I enter the domain name in the browser, it leads to the respective file (.html) with junk characters appended.
Example:
www.domainname.info
Leads to:
https://www.mydomainname.in/coffee.html/NjSmZ/KiKgZ/
Result:
Error 404 page not found.
I haven't changed any code; this behavior started suddenly.
UPDATE (more info):
NjSmZ/KiKgZ/ are the junk characters in the link. The forwarding is set up through the GoDaddy domain forwarder itself; no external code is involved in the forwarding.
www.Aitb.in is the domain that is being forwarded to advity.in/adarsha.html.

While I don't know how GoDaddy handles its domain forwarding internally, it does not seem to be a simple DNS CNAME, as nothing shows up in the current domain's lookup.
While playing around and looking at the forwarded domain's response, I see that it delivers a 301 (Moved Permanently) HTTP response. The response replaces the chosen domain with the new one and keeps the path part of the URL intact.
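You can check this yourself. Here is a minimal PHP/cURL sketch (my own illustration, not part of the original setup) that prints the status code and Location header of the forwarder's response; domain.a is just a placeholder:
<?php
// Fetch only the headers of the forwarded domain's response and print the
// status code and the Location header. "domain.a" is a placeholder domain.
$ch = curl_init('http://domain.a/some/path');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);           // include response headers in the output
curl_setopt($ch, CURLOPT_NOBODY, true);           // a HEAD-style request is enough
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false);  // we want to see the 301 itself
$response = curl_exec($ch);
echo 'Status: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
if (preg_match('/^Location:\s*(.+)$/mi', $response, $m)) {
    echo 'Location: ' . trim($m[1]) . "\n";       // shows how the path is carried over
}
curl_close($ch);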
Considering domain.a is the forwarded domain and domain.b is the new domain, that means:
http://domain.a/ => http://domain.b/
http://domain.a/contact.html => http://domain.b/contact.html
http://domain.a/a/long/path/ => http://domain.b/a/long/path/
But in your case, you are forwarding to more than just a domain: domain.b is really domain.b/coffee.html. Following the same rule, this means:
http://domain.a/ => http://domain.b/coffee.html
http://domain.a/contact.html => http://domain.b/coffee.html/contact.html
http://domain.a/a/long/path/ => http://domain.b/coffee.html/a/long/path/
So my suggestion is: either use a better landing page to url_rewrite the redirected paths to the correct one (a sketch of that idea follows the examples below), or, if you cannot, try adding a ? or # at the end of your forwarding URL. This is pure speculation, but if the rewrite has no other hidden rules, it would give something like the following, which makes the appropriate request and "hides" the trash part.
http://domain.a/ => http://domain.b/coffee.html?
http://domain.a/contact.html => http://domain.b/coffee.html?/contact.html
http://domain.a/a/long/path/ => http://domain.b/coffee.html?/a/long/path/
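If the forward target can point at a PHP file instead of the static .html (say, a hypothetical landing.php; the file name is my assumption, not something from the question), the url_rewrite idea could be sketched like this:
<?php
// Hypothetical landing.php: strip whatever the forwarder appended after the
// script name and redirect once to the clean URL, then serve the real content.
$script  = $_SERVER['SCRIPT_NAME'];                           // e.g. "/landing.php"
$request = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);  // e.g. "/landing.php/NjSmZ/KiKgZ/"
if ($request !== $script) {
    header('Location: ' . $script, true, 301);
    exit;
}
// ...render the actual landing content below this point.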

The "junk characters" are certainly coming from GoDaddy and not from the original request. Domain Forwarding is just what GoDaddy calls their service that redirects web requests using a 301 or 302 redirect (or an iframe they call "masking"). The issue is - For whatever reason the GoDaddy web servers serving the redirects often append some "random" characters (as a subfolder) after the domain. In my experience the subfolder always appear directly after the domain, and before any path that may have been part of the original request. So, as Salketer says it is just a hack. But there is still an issue on GoDaddy's side'
Also, if you do use the hack and you use Google Analytics on your site, you may want to add something like ?x= rather than just ?. Then you can exclude the x parameter in Analytics and you won't end up with a hundred different URLs for you homepage.

I had this problem occur on several different domains controlled by GoDaddy. I attempted several times to contact GoDaddy support to resolve the issue, with no luck. Ultimately I decided to solve the problem myself, because GoDaddy seems clueless about their own problem.
Here is my solution:
Add this PHP code to the top of your 404 error page. For WordPress, add it to your theme's 404.php file:
<?php
/* GoDaddy 404 Redirects FIX - by Daniel Chase - https://riseofweb.com */
$currURL = $_SERVER['REQUEST_URI'];

// The junk GoDaddy appends is a five-letter "subfolder", e.g. "NjSmZ/".
// Look for that pattern at the very end and at the very start of the URI.
$CheckRedirectError1 = substr($currURL, -6);    // last 6 characters, e.g. "KiKgZ/"
$CheckRedirectError2 = substr($currURL, 0, 7);  // first 7 characters, e.g. "/NjSmZ/"
$CheckRedirectError  = false;
if (preg_match("/^[a-zA-Z]{5}\/$/", $CheckRedirectError1)) {
    $CheckRedirectError = $CheckRedirectError1;
} else if (preg_match("/^\/[a-zA-Z]{5}\/$/", $CheckRedirectError2)) {
    $CheckRedirectError = substr($CheckRedirectError2, 1);
}

if ($CheckRedirectError) {
    // Rebuild the URL without the junk segment and redirect permanently.
    $protocol = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off' || $_SERVER['SERVER_PORT'] == 443) ? "https://" : "http://";
    $redirectTo = str_replace($CheckRedirectError, '', $currURL);
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: " . $protocol . $_SERVER['HTTP_HOST'] . $redirectTo);
    exit();
}
?>
The script checks for the random characters and removes them, and then redirects to the proper page. You may need to add some exceptions or modify the script to fit your needs.
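As a quick sanity check (my own example, reusing the URI pattern from the question), the test the script performs looks like this; with two appended segments the 404 page simply fires twice, stripping one five-letter segment per redirect:
<?php
// Example request URI resembling the one produced by GoDaddy's forwarder.
$uri = '/coffee.html/NjSmZ/KiKgZ/';
var_dump((bool) preg_match('/^[a-zA-Z]{5}\/$/', substr($uri, -6)));  // bool(true), so "KiKgZ/" is stripped
// After the first 301 the URI is "/coffee.html/NjSmZ/", which matches again
// and is stripped on the next pass, finally landing on "/coffee.html".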

Thank you,
I ended up solving this issue by adding "?" at the end of the domain forwarding link
example: mydomain.com/main/foo.html?
or
example: mydomain.com/main/foo.html#

Related

Error contacting the Parsoid/RESTBase server: http-bad-status on Fresh Mediawiki 1.35.0 LTS

https://www.mediawiki.org/wiki/MediaWiki_1.35 is out, and one of the advertised features seems to be the "built in"/"out of the box" VisualEditor that doesn't need an external server anymore.
So I downloaded and installed the just-released version and selected "VisualEditor" so that it would appear in my LocalSettings.php as:
wfLoadExtension( 'VisualEditor' );
But when trying to edit a page I get the error message:
Error contacting the Parsoid/RESTBase server: http-bad-status
with no further hint on what to do.
The information at https://www.mediawiki.org/wiki/Extension:VisualEditor is still intimidating to me; it doesn't look like an "out of the box" configuration at all. I did not find anything there about the dialog's message content.
Where do I find the official information on how to avoid this dialog?
I've managed to get VisualEditor working on Apache/Ubuntu with MediaWiki 1.37 set up as a private wiki.
This is what I've done.
$wgServer = "https://example.org";
Note the https in $wgServer!
And at the end of my LocalSettings.php:
if ( isset( $_SERVER['REMOTE_ADDR'] ) &&
    in_array( $_SERVER['REMOTE_ADDR'], [ $_SERVER['SERVER_ADDR'], '127.0.0.1' ] ) ) {
    // Allow the wiki's own server-side requests (VisualEditor calling rest.php
    // on localhost) to read and edit without logging in.
    $wgGroupPermissions['*']['read'] = true;
    $wgGroupPermissions['*']['edit'] = true;
    $wgGroupPermissions['*']['writeapi'] = true;
}
Making sure that $wgServer in LocalSettings.php has https and not http in the string solved it for me.
If you are using your webserver's HTTP-based authentication, you have to whitelist localhost so that MediaWiki can reach itself.
For Apache you can do this with
Require local
at the same spot where you configured the authentication. You can find detailed configuration descriptions in the MediaWiki Wiki.
https://www.mediawiki.org/wiki/Topic:Vwkv6abtipmknci8
However, I would not recommend whitelisting based on the user agent: attackers could circumvent the authentication just by changing their user agent string.
In my case I only run into this problem when I use a "nested" or structured wiki page.
It works for pages like TestPage, VideoCut, BestPractices, but not for pages like TestPage/Test1, TestPage/Hugo, and so on.
Looking at the webserver log, it seems the rest.php URL is not built correctly.
In the good case, the following POST request is sent to rest.php:
POST /wiki/rest.php/localhost/v3/transform/html/to/wikitext/TestPage/12 HTTP/1.1" 200 521 "-" "VisualEditor-MediaWiki/1.38.2"
In the bad case the request looks like:
POST /wiki/rest.php/localhost/v3/transform/html/to/wikitext/TestPage%2FTest1 HTTP/1.1" 404 981 "-" "VisualEditor-MediaWiki/1.38.2"
It ends up in a 404 instead of a successful 200. The problem seems to be the encoded %2F (/) inside the page path (TestPage/Test1 -> TestPage%2FTest1).
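To illustrate the encoding itself (just a demonstration, not MediaWiki code):
<?php
// A subpage title contains a slash; when it is placed into the rest.php path
// it gets percent-encoded, and many webserver setups refuse or mis-route
// paths containing an encoded slash (%2F), hence the 404.
$title = 'TestPage/Test1';
echo rawurlencode($title), "\n";   // TestPage%2FTest1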

Rails5 "Can't verify CSRF token authenticity" issue with subdomain in production environment

I have a subdomain:
https://admin.mysite.com
In my production environment, when I sign in using the Devise form, I get the error "Can't verify CSRF token authenticity".
I did a lot of research on Google and learned that I might need to change initializers/session_store.rb. My default session_store.rb contains:
Rails.application.config.session_store :cookie_store, key: '_myapp_session'
Some said that :domain should be set to ".mysite.com" and others that it should be :all. I tried all combinations, including the tld options, but I was still getting the error. For example:
Rails.application.config.session_store :cookie_store,
  :key        => '_myapp_session',
  :domain     => :all,  # :all defaults to a tld length of 1; '.web' has a length of 1
  :tld_length => 2      # Top Level Domain (tld) length: '*.myapp.web' has a length of 2
Please help, thanks.
After trying a lot of combinations it turned out that I had to include an SSL setting in my nginx config and didn't need to change session_store.rb at all.
I added the following line to my nginx config and everything worked fine:
proxy_set_header X-Forwarded-Ssl on;
Note: If you have many specific domains and your application requirements are complex, then you might still have to change session_store.rb, but in my case I had just this subdomain (and not even a main domain serving the site), so this was enough. Rails 5 handled it automatically and I didn't need to change anything in my app except that SSL setting in my nginx config.
I hope this helps someone else. :)

Website not opening from Google search (http to https)

We have a website; our domain is on DNSimple and the server is on Heroku. We completed all the necessary SSL configuration steps on both Heroku and DNSimple.
When you type any of the 4 URLs (listed below) into the URL bar, it works.
But the problem is, when I search for my website on Google and click the link, it does not open, especially in Internet Explorer and Safari. It gives a 404 error.
Error from the console:
ActionController::RoutingError (No route matches "https://www.google.com.tr/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0ahUKEwiU0JX39YnQAhWJEiwKHSq2BpQQFggcMAA&url=https%3A%2F%2Fwww.example.com%2F&usg=AFQjCNFg0D1KFk0WGvYOfUoVzNm19KDBYw&bvm=bv.137132246,d.bGg"):
I have added all 4 variants to Google Search Console:
https://www.example.com
https://example.com
http://www.example.com
http://example.com
It's been more than 24 hours, by the way. For Internet Explorer I tried clearing the history and flushing the DNS cache. Still no luck.
EDIT:
There are no referrer checks. But there is some code in the application controller that decides whether or not to show the header; I do not think it is relevant:
before_filter :set_locale, :location_action_name

def location_action_name
  if !logged_in?
    url = Rails.application.routes.recognize_path(request.referrer)
    #last_controller = url[:controller]
    #last_action = url[:action]
  end
end
EDIT2:
I removed the code below and pushed to Heroku; now it works. But why is that?
The issue is caused by the following code fragment:
url = Rails.application.routes.recognize_path(request.referrer)
#last_controller = url[:controller]
#last_action = url[:action]
As far as I remember (and I guess, given that recognize_path was removed in recent versions of Rails), recognize_path raises a routing error when it can't find the path.
You are passing arbitrary strings to the function, but the function only recognizes paths internally described in the router.
request.referrer would return 3 different types of URL:
blank: when no referrer
external URL: when the visitor clicks elsewhere and reaches your app
internal URL: when the visitor clicks somewhere in your app
In all cases except the third one, recognize_path will raise an error. The error will be caught by the default Rails handler and displayed as 404 (an unrecognized route results in a 404 in production).
I'm not sure what location_action_name is supposed to do, but as implemented it is very fragile and in most cases it will end up causing 404 responses in your app.

Permanent redirects by MediaWiki: when?

Some time ago I set up a MediaWiki installation with very short URLs like http://(mydomain)/Page_Title.
I made sure it worked, even if the page was called /etc: http://(mydomain)//etc opened correctly.
Now I found out that some time ago this stopped working. Instead, MW 1.26 and 1.27 under HHVM 3.14.3 and nginx 1.10.1 deliver a circular permanent redirect (code 301) to http://(mydomain)//etc in response to http://(mydomain)//etc and even http://(mydomain)/w/index.php?title=/etc. The redirect is issued not by nginx but by HHVM, and therefore by MediaWiki.
I do not know whether I have broken something in MediaWiki configuration (it's huge, so I will not provide it) or some new bug has been introduced into MediaWiki or HHVM.
My question is: where are the places (files or classes) in MediaWiki core code that can reply with a 301 to a simple page view, so I can check which configuration settings affect this behaviour?
Most of the places that return a permanent redirect are in the file includes/MediaWiki.php (new since MediaWiki 1.26.3): lines 230, 286, 341, 353.
In my case, the hack I had previously added to LocalSettings.php to deal with ultra-short URLs (see below) contributed to the endless redirect. Without it, the redirect would be http[s]://(mydomain)//etc → http[s]://etc, which is totally wrong too, but not circular.
The hack:
// This is for titles starting with /: [[/etc]] → "//etc" → "/.//etc".
// Set merge_slashes off in nginx config!
$wgHooks['GetLocalURL'][] = function ($title, &$url, $query) {
    if (mb_substr($title->getText(), 0, 1) === '/'
        && $title->getNamespace() === 0
        && !MWNamespace::hasSubpages(0 /* same as $title->getNamespace(), but faster */)
    ) {
        $url = '/.' . $url;
    }
    return true;
};
URL was "normalised": in lines 792-799 of include/WebRequest.php: several leading slashes there are replaced with one; then MediaWiki::tryNormaliseRedirect () saw that the "normalised" URL is not equal to the original one and served 301.
This looks like a rather old bug in WebRequest::getGlobalRequestURL(), which was unmasked only in MediaWiki 1.26.3; so I filed it: https://phabricator.wikimedia.org/T141444.

JSON syntax error in Opencart 2.0.3.2 RC multi store

Via GitHub I installed the 2.0.3.2 RC version on my DigitalOcean VPS. All seemed to work fine, but just like many others I ran into the JSON syntax error. I spent hours reading through forum pages about:
API users that have to be created
API users that have to be assigned
maintenance mode that had to be switched off
the $json = array(); solution
and cURL loopback restrictions (including the vqmod cURL loopback workaround): http://forum.opencart.com/viewtopic.php?f=191&t=146714
None of these solutions seemed to work... When I finally realised that I had restricted access to my VPS by IP address and removed that restriction, the order history update seemed to work fine, so I assumed all was OK.
Today, when I tried to edit an order, the same error popped up again, so I started going over the forums once more for a solution.
While trying things in heavy frustration, I bumped into this strange behaviour: on the first page of order editing I get the error, but when I select the standard shop everything works fine and I can edit the order exactly how I want. When I switch the option back to the store the order was placed in, it immediately responds with the same error (see attachment).
Are there any other multi-store users on 2.0.3+ whose shops are working fine?
Could you think along with me? Could it be something to do with the Cross-Origin Resource Sharing policy? All suggestions are welcome!
Go to Settings, edit your store (not Default),
and on the first tab (General), make sure that your SSL URL is set.
If you don't have SSL, then set the same value as the Store URL.
Hope this helps.
Probably a cross-origin policy issue, as you mentioned. I solved this issue on 1.5.6, as well as the cross-domain cookie issue (which, to my knowledge, has never worked properly on any version), by adding:
xhrFields: { withCredentials: true },
in the AJAX request, as well as setting Access-Control-Allow-Credentials on the response. The trick here is that for cross-origin requests to work this way, you need to explicitly declare the origin that is allowed (i.e., Header set Access-Control-Allow-Origin "*" will not work). The next trick is that you don't want to accept these requests from any and every URL.
To work around this, I added something like the following to the manual.php controller, which in 2.0+ would be api/order.php (and, for cross-domain cookie sharing, common/header.php as well):
$this->load->model('setting/store');

// Build a whitelist of every URL this installation serves.
$allowed   = array();
$allowed[] = trim(HTTP_SERVER, '/');
$allowed[] = trim(HTTPS_SERVER, '/');
$stores = $this->model_setting_store->getStores();
foreach ($stores as $store) {
    if ($store['url']) $allowed[] = strtolower(trim($store['url'], '/'));
    if ($store['ssl']) $allowed[] = strtolower(trim($store['ssl'], '/'));
}

// Echo the requesting origin back only if it belongs to one of our stores;
// credentials (cookies/session) are only allowed for whitelisted origins.
if (isset($this->request->server['HTTP_REFERER'])) {
    $url_parts = parse_url($this->request->server['HTTP_REFERER']);
    $origin = strtolower($url_parts['scheme'] . '://' . $url_parts['host']);
    if (in_array($origin, $allowed)) {
        header("access-control-allow-origin: " . $origin);
        header("access-control-allow-credentials: true");
    } else {
        header("access-control-allow-origin: *");
    }
} else {
    header("access-control-allow-origin: *");
}
header("access-control-allow-headers: Origin, X-Requested-With, Content-Type, Accept");
header("access-control-allow-methods: PUT, GET, POST, DELETE, OPTIONS");
This basically creates an array of all acceptable URLs, and if the request comes from one of them it sets the HTTP headers explicitly to allow cookies and session data. This was primarily a fix for cross-domain cookie sharing, but I have a feeling it may help work around the 2.0 API issue as well.
A colleague of mine found out that the API calls are always made over SSL; all I had to do was add the normal store URL in the SSL field in the settings of the store (not the main one).