I am a very new user here, so apologies in advance if I break any rules. Here is the problem I am facing; I need suggestions please.
I have a Chrome extension which works with Gmail and consumes APIs from my web server, a Rails application running on nginx through Phusion Passenger.
My nginx version is 1.15.8 and my Phusion Passenger version is Phusion Passenger Enterprise 6.0.1.
I had the CORS settings in nginx as follows:
####### CORS Management ##########
add_header 'Access-Control-Allow-Origin' 'https://mail.google.com,https://*.gmail.com';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, HEAD';
add_header Referrer-Policy "no-referrer";
add_header Pragma "no-cache";
##################################
This used to work until Chrome 84; however, with the update to Chrome 85 it has started throwing CORS errors as follows:
########## Error started appearing in Chrome 85 ############
Access to fetch at 'https://my-site.com/' from origin 'https://mail.google.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
##########################################
After this, I updated the CORS settings to be wide open, following suggestions from various sources and blogs. The updated CORS settings now look like this:
UPDATED CORS Settings in Nginx
location / {
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' $http_origin always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
#
# Custom headers and headers various browsers *should* be OK with but aren't
#
add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;
#
# Tell client that this pre-flight info is valid for 20 days
#
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8' always;
add_header 'Content-Length' 0 always;
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' $http_origin always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' $http_origin always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;
}
}
############################################
After updating this setting in nginx, the CORS error is gone, but now I am getting a 401 Unauthorized error from the server when the extension makes an API call.
I have tried tweaking all the methods but couldn't fix it. Is there something I am missing or doing wrong?
Please help!
Isn't that the effect of this spec change?
Changes to Cross-Origin Requests in Chrome Extension Content Scripts
https://www.chromium.org/Home/chromium-security/extension-content-script-fetches
I had the same problem. My solution was (as described in the link above) to move the HTTP requests into the background script. You need to send a message to the background script and perform the request from there.
On receiving the response, the background script sends a message back to the content script, where you can handle the response data.
ContentPage                    BackgroundPage
  -- RequestData -->
                               Initialize the request and return to the content script
  .... some time later ....
                               Callback of the HttpRequest is finished
  <-- handleResponse --        (in the callback handler)
Content Script:
var msg = new Object();
msg.message = "loadOrders";
chrome.runtime.sendMessage(msg);
Background Script:
chrome.runtime.onMessage.addListener(
  function (msg, sender, sendResponse) {
    if (msg.message == "loadOrders") {
      // doXHRRequest is the extension's own helper that performs the actual HTTP call.
      doXHRRequest(function (responseData) {
        sendMessageToActiveTab(responseData);
      });
    }
  }
);

// Forward the response (assumed here to be the raw response text) to the
// content script of the currently active tab.
function sendMessageToActiveTab(responseData) {
  var msg = new Object();
  msg.message = "receiveOrders";
  msg.orderList = JSON.parse(responseData);
  chrome.tabs.query({active: true, currentWindow: true}, function (tabs) {
    chrome.tabs.sendMessage(tabs[0].id, msg);
  });
}
And last in the content script:
chrome.runtime.onMessage.addListener(function (message, sender, sendResponse) {
  if (message.message == "receiveOrders") {
    receiveOrderList(message.orderList);
  }
});
As stated in
https://developers.google.com/web/updates/2020/07/chrome-85-deps-rems
Chrome 85 rejects insecure SameSite=None cookies:
Use of cookies with SameSite set to None without the Secure attribute is no longer supported. Any cookie that requests SameSite=None but is not marked Secure will be rejected. This feature started rolling out to users of Stable Chrome on July 14, 2020. See SameSite Updates for a full timeline and details. Cookies delivered over plaintext channels may be cataloged or modified by network attackers. Requiring secure transport for cookies intended for cross-site usage reduces this risk.
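In practice, this means any session cookie the extension relies on for authentication must now be issued with both SameSite=None and Secure; otherwise Chrome 85 silently drops it and the API call reaches the server without credentials, which would explain the 401. A rough sketch of how the flags could be added at the nginx layer; this assumes the Rails app is reached via proxy_pass (with Passenger-embedded apps the attributes are normally set in the Rails session store instead) and requires nginx 1.19.3 or newer for proxy_cookie_flags:
# Hypothetical sketch, not the asker's configuration.
location /api/ {
    proxy_pass http://127.0.0.1:3000;             # assumed Rails upstream
    # Mark every upstream cookie as "Secure; SameSite=None" so Chrome 85+
    # keeps sending it on cross-site requests from https://mail.google.com.
    proxy_cookie_flags ~ secure samesite=none;    # nginx 1.19.3+
}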
I'm trying to hit my own endpoint on a subdomain controlled by nginx. I expect the request to fail and return a JSON payload like this:
{
"hasError": true,
"data": null,
"error": {
"statusCode": 401,
"statusDescription": null,
"message": "Could not find Session Reference in Request Headers"
}
}
When I make this request in a browser (Brave), the network tools show a 401 response, and I get this error in the console:
Access to fetch at 'https://services.mfwebdev.net/api/authentication/validate-session' from origin 'https://mfwebdev.net' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
When I open the URL directly in the browser, I see the correct JSON response, and if I hit the URL in a REST client like Insomnia, I also see the JSON response.
The headers that the browser is sending are:
:authority: services.mfwebdev.net
:method: GET
:path: /api/authentication/validate-session
:scheme: https
accept: application/json, text/plain, */*
accept-encoding: gzip, deflate, br
accept-language: en-GB,en-US;q=0.9,en;q=0.8
origin: https://mfwebdev.net
referer: https://mfwebdev.net/
sec-fetch-dest: empty
sec-fetch-mode: cors
sec-fetch-site: same-site
sec-gpc: 1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36
I've actually used these headers in the REST client as well and I can still see the correct JSON result.
The request (in code) is as follows (Using Angular):
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import { ApiResponse } from 'src/app/api/types/response/api-response.class';
import { ISessionApiService } from 'src/app/api/types/session/session-api-service.interface';
import { SessionResponse } from 'src/app/api/types/session/session-response.class';
import { environment } from '../../../environments/environment';
@Injectable()
export class SessionApiService implements ISessionApiService {
private readonly _http: HttpClient;
constructor(http: HttpClient) {
this._http = http;
}
public createSession(): Observable<ApiResponse<SessionResponse>> {
return this._http.post<ApiResponse<SessionResponse>>(`${environment.servicesApiUrl}/authentication/authorise`, {
reference: environment.applicationReference,
applicationName: environment.applicationName,
referrer: environment.applicationReferrer
});
}
public validateSession(): Observable<ApiResponse<boolean>> {
return this._http.get<ApiResponse<boolean>>(`${environment.servicesApiUrl}/authentication/validate-session`);
}
}
Could someone please help? I'm at a complete loss here.
EDIT!! For anyone using nginx who may come across this problem: the issue was in my nginx.conf file. I am leaving an example of my (now working) server-side configuration below.
The reason it wasn't working was that I was not actually handling the request when an OPTIONS request came through.
I now handle every request type (or will) and append the Access-Control-Allow-Origin header to the response.
user root;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {}
http {
include /etc/nginx/proxy.conf;
limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;
server_tokens off;
sendfile on;
# Adjust keepalive_timeout to the lowest possible value that makes sense
# for your use case.
keepalive_timeout 1000;
client_body_timeout 1000;
client_header_timeout 10;
send_timeout 10;
upstream upstreamExample {
server 127.0.0.1:5001;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.net *.example.net;
ssl_certificate /etc/letsencrypt/live/example.net/cert.pem;
ssl_certificate_key /etc/letsencrypt/live/example.net/privkey.pem;
ssl_session_timeout 1d;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling off;
location / {
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
add_header 'Access-Control-Allow-Headers' '*';
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
add_header 'Access-Control-Allow-Headers' '*';
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
add_header 'Access-Control-Allow-Headers' '*';
}
if ($request_method = 'DELETE') {
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
add_header 'Access-Control-Allow-Headers' '*';
}
proxy_pass https://upstreamExample;
limit_req zone=one burst=10 nodelay;
}
}
}
You would need to enable CORS (Cross-Origin Resource Sharing) by sending the appropriate response headers, one of them being the Access-Control-Allow-Origin header, which tells the browser which origins may access the resource.
CORS policy is imposed by browsers as a security measure, not by REST clients such as Insomnia or Postman. That is why the HTTP request works in Insomnia but not in the browser.
From MDN:
For security reasons, browsers restrict cross-origin HTTP requests initiated from scripts. For example, XMLHttpRequest and the Fetch API follow the same-origin policy. This means that a web application using those APIs can only request resources from the same origin the application was loaded from unless the response from other origins includes the right CORS headers.
An HTTP request to a subdomain does not count as same-origin, hence you would need to enable CORS.
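For example, https://mfwebdev.net and https://services.mfwebdev.net are different origins (the host differs), so the API server has to opt in explicitly. A minimal illustrative sketch of what the server block for the API subdomain would need; the location path is an assumption, and this is not the asker's full configuration:
server {
    server_name services.mfwebdev.net;

    location /api/ {
        # "always" matters here: without it nginx only adds the header to
        # 2xx/3xx responses, so an intentional 401 would arrive without it.
        add_header 'Access-Control-Allow-Origin' 'https://mfwebdev.net' always;
        proxy_pass https://upstreamExample;   # assumed upstream name, as in the edit above
    }
}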
Resources:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
https://developer.mozilla.org/en-US/docs/Glossary/Origin
https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy#definition_of_an_origin
I have this configuration in my nginx:
location ~* ^/test(.*) {
add_header "Access-Control-Allow-Origin" $http_origin;
add_header "Access-Control-Allow-Credentials" "true";
# PLEASE NOTE THIS ONE, IT IS SEPARATED BY A NEWLINE
add_header "Access-Control-Allow-Headers" "Access-Control-Allow-Origin,Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Re
quested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range,sid_internal,access_token,Referer";
add_header "Access-Control-Allow-Methods" "GET,POST,DELETE,OPTIONS,PATCH";
# Preflighted requests
if ($request_method = OPTIONS ) {
add_header "Access-Control-Allow-Origin" $http_origin;
add_header "Access-Control-Allow-Credentials" "true";
add_header "Access-Control-Allow-Methods" "GET,POST,DELETE,OPTIONS,HEAD,PATCH";
# PLEASE NOTE THIS ONE, IT IS SEPARATED BY A NEWLINE
add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept, sid_internal, access_token, R
eferer";
return 204;
}
}
Suppose the server name is example.com.
When I hit https://example.com/test:
With Chrome and Opera, I get: HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR
With Firefox, it works perfectly fine.
When I use curl from my Linux terminal, I also get: HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR
Then I did some fixing by removing the newline-separated value in the nginx config, so it becomes like this:
location ~* ^/test(.*) {
add_header "Access-Control-Allow-Origin" $http_origin;
add_header "Access-Control-Allow-Credentials" "true";
# PLEASE NOTE THIS ONE, NEWLINE REMOVED
add_header "Access-Control-Allow-Headers" "Access-Control-Allow-Origin,Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range,sid_internal,access_token,Referer";
add_header "Access-Control-Allow-Methods" "GET,POST,DELETE,OPTIONS,PATCH";
# Preflighted requests
if ($request_method = OPTIONS ) {
add_header "Access-Control-Allow-Origin" $http_origin;
add_header "Access-Control-Allow-Credentials" "true";
add_header "Access-Control-Allow-Methods" "GET,POST,DELETE,OPTIONS,HEAD,PATCH";
# PLEASE NOTE THIS ONE, NEWLINE REMOVED
add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept, sid_internal, access_token, Referer";
return 204;
}
}
After I changed that nginx config, everything works (in Chrome, in Opera, and in my Linux terminal).
Well, my problem is actually solved.
But I'm just wondering: does anyone know why the first nginx config (the one with the newline-separated value) caused a PROTOCOL_ERROR?
It is not possible to have a newline character in a header value in HTTP/2: the spec treats field values containing line breaks as malformed, and Chrome and curl enforce this strictly, while Firefox is apparently more lenient. I saw the same PROTOCOL_ERROR after I wrongly put the header name together with its value as the first argument: add_header "Something: value";
HTTP/2 seems to be stricter about headers than HTTP/1.1.
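One way to make that mistake harder to repeat is to keep the long header list in a single nginx variable and reference it from every add_header line, so the value is only written once. A rough sketch under that assumption ($cors_allow_headers is a made-up variable name, and the list is abbreviated):
# Sketch only: "set" is valid at server/location level (ngx_http_rewrite_module).
set $cors_allow_headers "Authorization,Accept,Origin,DNT,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,sid_internal,access_token,Referer";

location ~* ^/test(.*) {
    add_header "Access-Control-Allow-Headers" $cors_allow_headers;

    if ($request_method = OPTIONS) {
        add_header "Access-Control-Allow-Headers" $cors_allow_headers;
        return 204;
    }
}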
I'm trying to build an nginx-based maintenance-mode application that catches all requests to my applications and returns a predefined response as a 503.
I currently have applications requesting JSON responses as well as users accessing the pages with their browsers. So if the request contains the header Accept: application/json, I want to respond with the content of a JSON file, maintenance.json, otherwise with an HTML file, maintenance.html.
My current nginx config looks like this:
server {
listen 8080;
root /usr/share/nginx/maintenance;
server_tokens off;
error_page 503 = @unavailable;
location ^~ / {
return 503;
}
location @unavailable {
set $maintenanceContentType text/html;
set $maintenanceFile /maintenance.html;
if ($http_accept = 'application/json') {
set $maintenanceContentType application/json;
set $maintenanceFile /maintenance.json;
}
default_type $maintenanceContentType;
try_files $uri $maintenanceFile;
}
}
For browser requests to any path, e.g. https://maintenance.my-domain.local/some-path, this works out fine: I get the response code and the HTML content.
But for requests with the header Accept: application/json I get a 404 HTML page, and the nginx log shows [error] 21#21: *1 open() "/usr/share/nginx/maintenance/some-path" failed (2: No such file or directory), client: 10.244.2.65, server: , request: "GET /asd HTTP/1.1", host: "maintenance.my-domain.local".
It seems like JSON requests are ignoring my location for some reason. When I remove the directives that select the appropriate file and just always return the HTML, this works for JSON requests as well.
Does anyone have an idea?
I'm not necessarily looking for a fix for this specific config, but rather for something that fits my needs of responding with different "error pages" based on the Accept header.
Thanks in advance!
EDIT: For some reason this now results in an HTTP 200 instead of a 503. I don't know what I changed.
EDIT2: Managed to fix a part of it:
server {
listen 8080;
root /usr/share/nginx/maintenance;
server_tokens off;
location ^~ / {
if ($http_accept = 'application/json') {
return 503;
}
try_files /maintenance.html =404;
}
error_page 503 /maintenance.json;
location = /maintenance.json {
internal;
}
}
With this config I now get the maintenance page when using the browser, and the maintenance JSON when setting the header Accept: application/json. The browser response code is 200 now, though...
Ok, I found the solution to my problem.
# map the incoming Accept header to a file extension
map $http_accept $accept_ext {
default html;
application/json json;
}
server {
listen 8080;
root /usr/share/nginx/maintenance;
server_tokens off;
# return 503 for all incoming requests
location ^~ / {
return 503;
}
# a 503 redirects to the internal location @maintenance. The extension of
# the returned file is decided by the Accept header map above
# (404 in case the file is not found).
error_page 503 @maintenance;
location @maintenance {
internal;
try_files /maintenance.$accept_ext =404;
}
}
The key was the map at the top. I just added application/json there and mapped everything else to the HTML file by default. But you could of course add multiple other files/file types there.
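For example, another Accept value could be mapped to its own file without touching the locations; the application/xml entry below is purely illustrative and assumes a /maintenance.xml file exists:
# Illustrative extension of the map above.
map $http_accept $accept_ext {
    default          html;
    application/json json;
    application/xml  xml;   # would serve /maintenance.xml
}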
I am building a small JavaScript app to list tasks from ActiveCollab using the API, but I am getting into CORS issues.
The issue is occurring because the ActiveCollab API response does not include an Access-Control-Allow-Headers in the response, see https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSMissingAllowHeaderFromPreflight.
Would the ActiveCollab developers be willing to add the necessary headers to the API response?
Thank you,
Miguel
Answering my own question.
For local development I used a CORS browser extension (https://addons.mozilla.org/en-GB/firefox/addon/cors-everywhere) to work around the missing CORS headers.
In production, we serve the app via nginx and set up a proxy pass to set the correct headers. The app uses the proxy URL rather than the ActiveCollab API URL.
In the app settings:
VUE_APP_AC_API_URL = '<SERVER_URL>/ac-forwarder/<ACTIVECOLLAB_ACCOUNT>/api/v1'
In the nginx site settings:
location /ac-forwarder/ {
proxy_pass https://app.activecollab.com/;
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '<SERVER_URL>';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,X-Angie-AuthApiToken';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
if ($request_method = 'GET') {
#add_header 'Access-Control-Allow-Origin' '<SERVER_URL>';
add_header 'Access-Control-Allow-Methods' 'GET';
add_header 'Access-Control-Allow-Headers'
'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,X-Angie-AuthApiToken';
add_header 'Access-Control-Expose-Headers'
'Content-Length,Content-Range';
}
}
location /app {
alias /path/to/the/built/app;
}
I have the following nginx.conf:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 8080;
server_name localhost;
index index.html index.htm;
location /docs {
alias /usr/share/nginx/html;
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
#
# Custom headers and headers various browsers *should* be OK with but aren't
#
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
#
# Tell client that this pre-flight info is valid for 20 days
#
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
}
}
}
nginx is running in Docker. Traefik acts as a proxy and routes the /docs path to the nginx container (port 8080). The nginx container should simply return the (static) content.
My problem is that nginx always redirects me to http://api.example.com:8080/docs/ (which is not accessible because I run nginx in Docker behind Traefik, that's why I need the path). I simply want to get the HTML content from the html directory under https://api.example.com/docs.
Additional output:
10.0.5.16 - example [11/Aug/2018:17:30:45 +0000] "GET /docs HTTP/1.1" 301 185 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.117 Safari/537.36"
How can I serve the content under the /docs URL without this redirect, which is wrong?
The 301 comes from nginx itself: when /docs resolves to a directory, nginx redirects to /docs/ (with a trailing slash) using the port it listens on, hence the :8080. To avoid the external redirect, you could use an internal rewrite from /docs to /docs/index.html.
For example:
location = /docs {
rewrite ^ /docs/index.html last;
}
location /docs {
...
}
This helped for me (a break after the rewrite and a second location for any files):
location = /docs {
root /usr/share/nginx/html;
rewrite ^ /docs/index.html break;
}
location /docs {
root /usr/share/nginx/html;
}