Rails5 "Can't verify CSRF token authenticity" issue with subdomain in production environment - subdomain

I have a subdomain:
https://admin.mysite.com
In my production environment, when I sign in using the Devise form, I get the error "Can't verify CSRF token authenticity".
I did a lot of research on Google and learned that I need to make a change in config/initializers/session_store.rb. My default session_store.rb file contains:
Rails.application.config.session_store :cookie_store, key: '_myapp_session'
Some said that :domain should be set to ".mysite.com" and others that it should be :all. I tried all the combinations, including the tld_length option, but I was still getting that error:
Rails.application.config.session_store :cookie_store,
  key: '_myapp_session',
  domain: :all,   # :all defaults to a TLD length of 1; '.web' has a length of 1
  tld_length: 2   # top-level domain (TLD) length -> '*.myapp.web' has a length of 2
Please help, thanks.

After trying a lot of combinations, it turned out that I had to include an SSL setting in my nginx config and didn't need to change session_store.rb at all.
I added the following line to my nginx config, and everything worked fine:
proxy_set_header X-Forwarded-Ssl on;
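For context, here is a minimal sketch of where that line sits in a typical nginx reverse-proxy block (the server name and upstream below are placeholders, not my actual config):

server {
    listen 443 ssl;
    server_name admin.mysite.com;            # the subdomain from the question
    # ssl_certificate / ssl_certificate_key omitted

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Ssl on; # the fix: tell Rails the request came in over SSL
        proxy_pass http://app_server;        # placeholder upstream
    }
}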
Note: if you serve many specific domains or your application's requirements are complex, you might still have to change session_store.rb. In my case I had just this subdomain (not even a main domain handling my site), so I was OK with it: Rails 5 handled the session automatically, and I didn't need to change anything in my app except that SSL setting in my nginx config.
I hope this will help someone else. :)

Related

Junk characters in URL when domain forwarding

I've been facing this issue lately. I have forwarded my domain to one of the files hosted on my GoDaddy shared hosting. However, whenever I enter the domain name in the browser, it leads to the respective .html file, but with junk characters appended to the path.
Example:
www.domainname.info
Leads to:
https://www.mydomainname.in/coffee.html/NjSmZ/KiKgZ/
Result:
Error 404 page not found.
I haven't changed any code; this behavior started suddenly.
UPDATE (more info):
The NjSmZ/KiKgZ/ part is the junk characters in the link. Forwarding is done through the GoDaddy domain forwarder itself; no external coding is done for the forwarding.
www.Aitb.in is the domain that is being forwarded to advity.in/adarsha.html.
While I don't know how GoDaddy does its domain forwarding internally, it does not seem to be a simple DNS CNAME, as nothing shows up in the current domain's lookup.
Playing around and looking at the forwarded domain's response, I see it delivers a 301 (Moved Permanently) HTTP response. The response replaces the chosen domain with the new one and keeps the path part of the URL intact.
Considering domain.a is the forwarded domain and domain.b is the new domain, that means:
http://domain.a/ => http://domain.b/
http://domain.a/contact.html => http://domain.b/contact.html
http://domain.a/a/long/path/ => http://domain.b/a/long/path/
But in your case, you are forwarding to more than just a domain: domain.b is more like domain.b/coffee.html. Following the same rule, this means:
http://domain.a/ => http://domain.b/coffee.html
http://domain.a/contact.html => http://domain.b/coffee.html/contact.html
http://domain.a/a/long/path/ => http://domain.b/coffee.html/a/long/path/
So my suggestion here is: either use a better landing page to url_rewrite the redirected paths to the correct ones, or, if you cannot, try adding a ? or # at the end of your URL. This is pure speculation, but if the rewrite has no other hidden rules, it would give something like the following, which will make the appropriate request and "hide" the junk part:
http://domain.a/ => http://domain.b/coffee.html?
http://domain.a/contact.html => http://domain.b/coffee.html?/contact.html
http://domain.a/a/long/path/ => http://domain.b/coffee.html?/a/long/path/
The "junk characters" are certainly coming from GoDaddy and not from the original request. Domain Forwarding is just what GoDaddy calls their service that redirects web requests using a 301 or 302 redirect (or an iframe they call "masking"). The issue is - For whatever reason the GoDaddy web servers serving the redirects often append some "random" characters (as a subfolder) after the domain. In my experience the subfolder always appear directly after the domain, and before any path that may have been part of the original request. So, as Salketer says it is just a hack. But there is still an issue on GoDaddy's side'
Also, if you do use the hack and you use Google Analytics on your site, you may want to add something like ?x= rather than just ?. Then you can exclude the x parameter in Analytics, and you won't end up with a hundred different URLs for your homepage.
I had this problem occur on several different domains controlled by GoDaddy. I attempted several times to contact GoDaddy support to resolve the issue, with no luck. Ultimately I decided to solve the problem myself, because GoDaddy seems clueless about their own problem.
Here is my solution:
Add this PHP code to the top of your 404 error page. For WordPress, add it to your theme's 404.php file:
<?php
/* GoDaddy 404 Redirects FIX - by Daniel Chase - https://riseofweb.com */
$currURL = $_SERVER['REQUEST_URI'];
$CheckRedirectError1 = substr($currURL, -6);   // last 6 chars of the path, e.g. "KiKgZ/"
$CheckRedirectError2 = substr($currURL, 0, 7); // first 7 chars of the path, e.g. "/NjSmZ/"
$CheckRedirectError = false;
if (preg_match("/^[a-zA-Z]{5}\/$/", $CheckRedirectError1)) {
    // Junk segment at the end of the path
    $CheckRedirectError = $CheckRedirectError1;
} else if (preg_match("/^\/[a-zA-Z]{5}\/$/", $CheckRedirectError2)) {
    // Junk segment at the start of the path
    $CheckRedirectError = substr($CheckRedirectError2, 1);
}
if ($CheckRedirectError) {
    $protocol = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off' || $_SERVER['SERVER_PORT'] == 443) ? "https://" : "http://";
    $redirectTo = str_replace($CheckRedirectError, '', $currURL);
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: " . $protocol . $_SERVER['HTTP_HOST'] . $redirectTo);
    exit();
}
?>
The script checks for a five-letter junk segment at either end of the request path, removes it, and redirects to the proper page; if more than one junk segment is present, each redirect lands on the 404 page again and strips the next one. You may need to add some exceptions or modify the script to fit your needs.
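As a concrete (hypothetical) trace using the junk segments from the question, a URL with two junk segments is cleaned one redirect at a time:

/coffee.html/NjSmZ/KiKgZ/  ->  301  ->  /coffee.html/NjSmZ/   ("KiKgZ/" stripped)
/coffee.html/NjSmZ/        ->  301  ->  /coffee.html/         ("NjSmZ/" stripped)
/coffee.html/              ->  no junk segment matches, so the page loads normally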
Thank you.
I ended up solving this issue by adding a "?" at the end of the domain forwarding link.
example: mydomain.com/main/foo.html?
or
example: mydomain.com/main/foo.html#

Parsoid test page fails during VisualEditor installation

I'm trying to install VisualEditor in my MediaWiki wiki but I get stuck when I test Parsoid.
This is the result of the test page:
error: No API URI available for prefix: enwiki; domain: undefined path: /_rt/mediawikiwiki/Parsoid
Error: No API URI available for prefix: enwiki; domain: undefined
at /usr/lib/parsoid/src/lib/config/MWParserEnvironment.js:295:10
at /usr/lib/parsoid/node_modules/prfun/lib/index.js:532:26
at tryCatch2 (/usr/lib/parsoid/node_modules/babybird/lib/promise.js:48:12)
at PrFunPromise.Promise (/usr/lib/parsoid/node_modules/babybird/lib/promise.js:458:15)
at new PrFunPromise (/usr/lib/parsoid/node_modules/prfun/lib/index.js:57:21)
at /usr/lib/parsoid/node_modules/prfun/lib/index.js:530:18
at tryCatch1 (/usr/lib/parsoid/node_modules/babybird/lib/promise.js:40:12)
at promiseReactionJob (/usr/lib/parsoid/node_modules/babybird/lib/promise.js:269:19)
at PromiseReactionJobTask.call (/usr/lib/parsoid/node_modules/babybird/lib/promise.js:284:3)
at flush (/usr/lib/parsoid/node_modules/babybird/node_modules/asap/raw.js:50:29)
I set the API in the settings.js file, and to make sure it is correct I tested it using the curl command. It works.
But I still have the problem.
Any suggestion?
You would've put something like this in Parsoid's localsettings.js:
parsoidConfig.setInterwiki( 'localhost', 'http://mediawiki.krenair.dev/mediawiki_dev/w/api.php' );
(example from my dev wiki setup)
That first string ('localhost' in my case) should be identical to the value VE is set to use via $wgVisualEditorParsoidPrefix in your wiki's LocalSettings.php (unless you're using some other system to configure that stuff, like VirtualRestConfig, in which case I can probably help in the comments). I believe you currently have it set to 'enwiki' for some reason, or else something is going wrong that leads Parsoid to default to 'enwiki' (I really don't know why they consider that a sane default).
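For reference, the wiki-side half of that pairing would look something like this in LocalSettings.php; the values here are examples and must mirror the setInterwiki() call above:

// LocalSettings.php -- example values; the prefix must match the first
// argument passed to parsoidConfig.setInterwiki() in localsettings.js
$wgVisualEditorParsoidURL = 'http://localhost:8000';
$wgVisualEditorParsoidPrefix = 'localhost';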

JSON syntax error in Opencart 2.0.3.2 RC multi store

Via GitHub I installed the 2.0.3.2 RC version on my DigitalOcean VPS. All seemed to work fine, but just like many others I got problems with the JSON syntax error. I spent hours reading through forum pages about:
API users that have to be made
API users that have to be appointed
Maintenance mode that had to be switched off
the json = array(); solution
and cURL loopback restrictions (including the vqmod cURL loopback workaround: http://forum.opencart.com/viewtopic.php?f=191&t=146714)
None of these solutions seemed to work... When I finally found out that my VPS access was restricted by IP address and removed this restriction, the order history update worked fine, so I assumed all was OK.
Today, when I tried to edit an order, the same error came popping up, so I started going over the forums again for a solution.
While heavily frustrated trying things, I bumped into this strange behaviour: on the first page of order editing I get the error, but when I select the standard shop, all works fine and I can edit the order exactly how I want. When I switch the option back to the store the order was placed in, it responds directly with the same error (see attachment).
Are there any other multi-store users on 2.0.3+ whose shops are working fine?
Could you think with me? Could it be something with the Cross-Origin Resource Sharing policy? All suggestions are welcome!
Go to Settings, edit your store (not the Default one),
and on the first tab (General), make sure that your SSL URL is set.
If you don't have SSL, then set the same value as Store URL.
Hope this helps.
Probably a cross-origin policy issue, as you mentioned. I solved this issue on 1.5.6, as well as the cross-domain cookie issue (which to my knowledge has never worked properly on any version), by adding:
xhrFields: { withCredentials: true },
to the AJAX request, as well as setting Access-Control-Allow-Credentials in the response headers. The trick here is that for cross-origin headers to work this way, you need to explicitly declare the URL that is allowed (i.e., Header set Access-Control-Allow-Origin "*" will not work). The next trick is that you don't want to accept these headers from any and every URL.
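On the request side (OpenCart's admin uses jQuery), the option goes into the $.ajax call like this; the route below is just an example, not the exact call from my store:

$.ajax({
    url: 'index.php?route=api/order/history&token=...', // example route; token elided
    type: 'post',
    dataType: 'json',
    xhrFields: { withCredentials: true }, // send session cookies with the cross-origin request
    success: function(json) {
        // handle the JSON response
    }
});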
To work around this, I added something like the following to the manual.php controller, which in 2.0+ would be api/order.php (and, for cross-domain cookie sharing, common/header.php as well):
$this->load->model('setting/store');

$allowed = array();
$allowed[] = trim(HTTP_SERVER, '/');
$allowed[] = trim(HTTPS_SERVER, '/');

$stores = $this->model_setting_store->getStores();
foreach ($stores as $store) {
    if ($store['url']) $allowed[] = strtolower(trim($store['url'], '/'));
    if ($store['ssl']) $allowed[] = strtolower(trim($store['ssl'], '/'));
}

if (isset($this->request->server['HTTP_REFERER'])) {
    $url_parts = parse_url($this->request->server['HTTP_REFERER']);
    $origin = strtolower($url_parts['scheme'] . '://' . $url_parts['host']);
    if (in_array($origin, $allowed)) {
        // Known origin: allow it explicitly so credentials can be shared
        header("access-control-allow-origin: " . $origin);
        header("access-control-allow-credentials: true");
    } else {
        header("access-control-allow-origin: *");
    }
} else {
    header("access-control-allow-origin: *");
}
header("access-control-allow-headers: Origin, X-Requested-With, Content-Type, Accept");
header("access-control-allow-methods: PUT, GET, POST, DELETE, OPTIONS");
This basically creates an array of all acceptable URLs, and if the request's origin is valid, it sets the HTTP headers explicitly to allow cookies and session data. This was primarily a fix for cross-domain cookie sharing, but I have a feeling it may be helpful for working around the 2.0 API issue as well.
A colleague of mine found out the API calls are always made over SSL; all I had to do was add the normal store URL in the SSL field in the settings of the store (not the main one).

Masking an image's real URL in Rails

I'm creating an incredibly basic photo sharing app in Rails that displays albums from the local filesystem.
For example -
/path/to/pictures
|
|-> 2003_college_graduation
|-> 2002_miami_spring_break
However, anyone can take a look at the HTML source and get the absolute path to the image -
my.server.com/path/to/pictures/2003_college_graduation/IMG_0001.JPG
And with a little guesswork, anyone could navigate to other images on the server, even ones they don't have permission to see.
Is there any way to "mask" the URL here?
One potential solution is to hash each file path into a UUID and store the mappings in a MySQL table. Then, when a URL with that hash is requested, I can look it up in the table and serve the correct image. But that makes the URL look messy, and it creates a problem if the file path ever changes (because the hash will change).
Are there any libraries or other workarounds to mask the real path to a file?
Thanks!
You could use a URL shortener (take your pick) and use that link. Visitors would still be able to see the original URL if they followed it, but it would keep it out of the HTML file.
What you're trying to achieve here is security through obscurity, which isn't going to work in the end. Someone could learn the scrambled URLs from some other source and still have access to pictures they should not be seeing.
The real solution is to actually control access to the files. It is a pretty common problem with a pretty common solution: to enforce access control, you have to invoke a Rails controller action before serving the file, verify the credentials, and then, if the credentials are valid, serve the actual file.
It could be like this in the controller:
class PhotoController < ApplicationController
  def photo
    if user_has_access?(params[:album], params[:photo])
      # Be *very* careful here to ensure that user_has_access? really validates
      # album and photo access; otherwise, there's a chance of letting a malicious
      # user get any file from your system by feeding in certain params[:album]
      # and params[:photo].
      send_file(File.join('/path/to/albums', params[:album], "#{params[:photo]}.jpg"),
                type: 'image/jpeg', disposition: 'inline')
    else
      render(file: File.join(Rails.root, 'public/403.html'), status: 403, layout: false)
    end
  end

  private

  def user_has_access?(album, photo)
    # Validate that the current user has access: return true if they do,
    # false if not.
  end
end
And then in your routes file:
get '/photos/:album/:photo.jpg' => 'photo#photo', as: :album_photo
And then in your views:
<%= image_tag album_photo_path('album', 'photo') %>
What's good about send_file is that it simply serves the file out of Rails in development mode, but in production it can be configured to offload the transfer to the actual web server, keeping the performance of your Rails code optimal.
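That offloading is Rails' X-Sendfile support; enabling it is one line in the production config. Which line depends on your front-end server, so treat these as the two common options rather than a prescription:

# config/environments/production.rb
config.action_dispatch.x_sendfile_header = 'X-Sendfile'          # for Apache (mod_xsendfile)
# config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'  # for nginx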
Hope that gives a basic idea of what it might be and helps a bit!

503 (Service Unavailable) when trying to Create File to Google Drive

We've been getting a 503 error since yesterday when making this call:
result = session.execute(
  api_method: drive.files.insert,
  body_object: file,
  media: media,
  parameters: { 'uploadType' => 'multipart', 'alt' => 'json' }
)
We have 3 sets of keys, one each for our development, staging, and production environments.
The above call works without issue in our development environment, but fails 100% of the time in both staging and production.
Based on the gists shared privately, it looks like the issue is the User-Agent header. I was able to reproduce it by setting the UA to something similar to that one. I haven't been able to narrow it down to a particular issue with the UA, just that it doesn't work. Other multi-line strings work, and other characters seem fine if I replace the newlines.
Anyway, the easiest thing to do is set :application_name to a different value than what you're using now.
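For the old google-api-client gem that this call style suggests, the application name is set when the client is built; the values below are placeholders, and the point is to keep it a plain, single-line string:

require 'google/api_client'

# Placeholder values -- keep application_name simple and single-line.
client = Google::APIClient.new(
  application_name: 'my-drive-uploader',
  application_version: '1.0.0'
)
drive = client.discovered_api('drive', 'v2')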