HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR - google-chrome

I have this configuration in my nginx config:
location ~* ^/test(.*) {
    add_header "Access-Control-Allow-Origin" $http_origin;
    add_header "Access-Control-Allow-Credentials" "true";
    # PLEASE NOTE THIS ONE, IT IS SEPARATED BY A NEWLINE
    add_header "Access-Control-Allow-Headers" "Access-Control-Allow-Origin,Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Re
quested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range,sid_internal,access_token,Referer";
    add_header "Access-Control-Allow-Methods" "GET,POST,DELETE,OPTIONS,PATCH";
    # Preflighted requests
    if ($request_method = OPTIONS) {
        add_header "Access-Control-Allow-Origin" $http_origin;
        add_header "Access-Control-Allow-Credentials" "true";
        add_header "Access-Control-Allow-Methods" "GET,POST,DELETE,OPTIONS,HEAD,PATCH";
        # PLEASE NOTE THIS ONE, IT IS SEPARATED BY A NEWLINE
        add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept, sid_internal, access_token, R
eferer";
        return 204;
    }
}
Suppose the server name is example.com.
When I hit https://example.com/test:
With Chrome and Opera, I get HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR.
With Firefox, it works perfectly fine.
When I curl the URL from my Linux terminal, I also get HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR.
But I fixed it by removing the newline from the header values, so the config becomes:
location ~* ^/test(.*) {
    add_header "Access-Control-Allow-Origin" $http_origin;
    add_header "Access-Control-Allow-Credentials" "true";
    # PLEASE NOTE THIS ONE, NEWLINE REMOVED
    add_header "Access-Control-Allow-Headers" "Access-Control-Allow-Origin,Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range,sid_internal,access_token,Referer";
    add_header "Access-Control-Allow-Methods" "GET,POST,DELETE,OPTIONS,PATCH";
    # Preflighted requests
    if ($request_method = OPTIONS) {
        add_header "Access-Control-Allow-Origin" $http_origin;
        add_header "Access-Control-Allow-Credentials" "true";
        add_header "Access-Control-Allow-Methods" "GET,POST,DELETE,OPTIONS,HEAD,PATCH";
        # PLEASE NOTE THIS ONE, NEWLINE REMOVED
        add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept, sid_internal, access_token, Referer";
        return 204;
    }
}
After changing the nginx config, everything works (in Chrome, in Opera, and from my Linux terminal).
My problem is solved.
But just wondering: does anyone know why the first nginx config (the one with the newline in the header value) caused a PROTOCOL_ERROR?

A header value cannot contain a newline character. HTTP/2 treats a field value containing a line break as malformed, so a strict client resets the stream with PROTOCOL_ERROR, which is exactly what Chrome and curl reported (Firefox was apparently more lenient). I saw the same PROTOCOL_ERROR after I wrongly put the header name together with its value as the first argument: add_header "Something: value";
HTTP/2 implementations are much stricter about header validity than HTTP/1.1 ones.
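You can see the same strictness without nginx: bare CR/LF characters are illegal in header values in HTTP/1.1 as well, and Python's standard-library HTTP client refuses to serialize such a value before anything reaches the wire. A minimal sketch (the header value here just mimics the split one from the broken config):

```python
import http.client

# No network traffic happens in this snippet: putheader() validates
# the value before the request is ever sent.
conn = http.client.HTTPConnection("example.com")
conn.putrequest("GET", "/test")
try:
    # The same kind of value the broken nginx config produced:
    # a header value split across two lines.
    conn.putheader("Access-Control-Allow-Headers",
                   "Authorization,Accept,Origin\nReferer")
except ValueError as exc:
    print("rejected:", exc)
```

HTTP/2 goes further: a received field value containing a line break makes the whole message malformed, so instead of guessing, a strict client such as Chrome tears down the stream with PROTOCOL_ERROR.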


HTTP Request: Can see JSON response in browser and REST Client but not Browser [duplicate]

This question already has answers here:
Why does my JavaScript code receive a "No 'Access-Control-Allow-Origin' header is present on the requested resource" error, while Postman does not?
(13 answers)
Closed 1 year ago.
I'm trying to hit my own endpoint on a subdomain controlled by Nginx. I expect the request to fail and return a JSON payload like this:
{
    "hasError": true,
    "data": null,
    "error": {
        "statusCode": 401,
        "statusDescription": null,
        "message": "Could not find Session Reference in Request Headers"
    }
}
When I make this request in a browser (Brave), it returns a 401, and this error appears in the console:
Access to fetch at 'https://services.mfwebdev.net/api/authentication/validate-session' from origin 'https://mfwebdev.net' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
When I open the URL directly in the browser, I see the correct JSON response, and in a REST client like Insomnia I can also see it.
The headers the browser is sending are:
:authority: services.mfwebdev.net
:method: GET
:path: /api/authentication/validate-session
:scheme: https
accept: application/json, text/plain, */*
accept-encoding: gzip, deflate, br
accept-language: en-GB,en-US;q=0.9,en;q=0.8
origin: https://mfwebdev.net
referer: https://mfwebdev.net/
sec-fetch-dest: empty
sec-fetch-mode: cors
sec-fetch-site: same-site
sec-gpc: 1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36
I've actually used these headers in the REST client as well and I can still see the correct JSON result.
The request (in code) is as follows (Using Angular):
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import { ApiResponse } from 'src/app/api/types/response/api-response.class';
import { ISessionApiService } from 'src/app/api/types/session/session-api-service.interface';
import { SessionResponse } from 'src/app/api/types/session/session-response.class';
import { environment } from '../../../environments/environment';

@Injectable()
export class SessionApiService implements ISessionApiService {
    private readonly _http: HttpClient;

    constructor(http: HttpClient) {
        this._http = http;
    }

    public createSession(): Observable<ApiResponse<SessionResponse>> {
        return this._http.post<ApiResponse<SessionResponse>>(`${environment.servicesApiUrl}/authentication/authorise`, {
            reference: environment.applicationReference,
            applicationName: environment.applicationName,
            referrer: environment.applicationReferrer
        });
    }

    public validateSession(): Observable<ApiResponse<boolean>> {
        return this._http.get<ApiResponse<boolean>>(`${environment.servicesApiUrl}/authentication/validate-session`);
    }
}
Could someone please help? I'm at a complete loss here.
EDIT!! For anyone using Nginx who may come across this problem: the issue was in my nginx.conf file. I am leaving an example of my (now working) server-side configuration.
The reason it wasn't working was that I was not actually handling OPTIONS (preflight) requests at all.
I now handle every request type (or will) and append the Access-Control-Allow-Origin header to the response.
user root;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {}

http {
    include /etc/nginx/proxy.conf;
    limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;
    server_tokens off;
    sendfile on;
    # Adjust keepalive_timeout to the lowest possible value that makes sense
    # for your use case.
    keepalive_timeout 1000;
    client_body_timeout 1000;
    client_header_timeout 10;
    send_timeout 10;

    upstream upstreamExample {
        server 127.0.0.1:5001;
    }

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name example.net *.example.net;
        ssl_certificate /etc/letsencrypt/live/example.net/cert.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.net/privkey.pem;
        ssl_session_timeout 1d;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers off;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off;
        ssl_stapling off;

        location / {
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain; charset=utf-8';
                add_header 'Content-Length' 0;
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
                add_header 'Access-Control-Allow-Headers' '*';
                return 204;
            }
            if ($request_method = 'POST') {
                add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Access-Control-Allow-Origin' '*' always;
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
                add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
                add_header 'Access-Control-Allow-Headers' '*';
            }
            if ($request_method = 'GET') {
                add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Access-Control-Allow-Origin' '*' always;
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
                add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
                add_header 'Access-Control-Allow-Headers' '*';
            }
            if ($request_method = 'DELETE') {
                add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Access-Control-Allow-Origin' '*' always;
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
                add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
                add_header 'Access-Control-Allow-Headers' '*';
            }
            proxy_pass https://upstreamExample;
            limit_req zone=one burst=10 nodelay;
        }
    }
}
You need to enable CORS (Cross-Origin Resource Sharing) by sending the appropriate response headers, one of them being the Access-Control-Allow-Origin header, which tells the browser which origins may access the resource.
The CORS policy is imposed by browsers as a security measure, not by REST clients such as Insomnia or Postman. Hence the HTTP request works in Insomnia but not in the browser.
From MDN:
For security reasons, browsers restrict cross-origin HTTP requests initiated from scripts. For example, XMLHttpRequest and the Fetch API follow the same-origin policy. This means that a web application using those APIs can only request resources from the same origin the application was loaded from unless the response from other origins includes the right CORS headers.
An HTTP request to a subdomain is a cross-origin request under the same-origin policy, and hence you need to enable CORS.
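The two hosts in the question illustrate this: an origin is the scheme/host/port triple, and a subdomain changes the host. A quick sketch (the origin() helper here is illustrative, not a standard API):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    # An origin is the (scheme, host, port) triple of a URL.
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS[parts.scheme]
    return (parts.scheme, parts.hostname, port)

app = origin("https://mfwebdev.net/")
api = origin("https://services.mfwebdev.net/api/authentication/validate-session")
print(app)         # ('https', 'mfwebdev.net', 443)
print(api)         # ('https', 'services.mfwebdev.net', 443)
print(app == api)  # False: cross-origin, so CORS headers are required
```

Because the two origins differ, the browser requires the API's responses to carry the CORS headers before it hands the body to the page's JavaScript.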
Resources:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
https://developer.mozilla.org/en-US/docs/Glossary/Origin
https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy#definition_of_an_origin

Blazor WebAssembly nginx server returns HTML on *.css or *.js files

I'm having a hard time figuring out why resources such as CSS and JS files are returned with the same content as index.html:
Each of those GET requests returns the content of index.html instead of the original file.
My nginx configuration looks like this:
server {
    server_name <DOMAIN>;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    add_header X-Frame-Options "SAMEORIGIN";
    #add_header X-Content-Type-Options "nosniff";
    add_header X-Robots-Tag "none";
    add_header X-Download-Options "noopen";
    add_header X-Permitted-Cross-Domain-Policies "none";
    add_header X-XSS-Protection "1;mode=block";
    add_header Strict-Transport-Security "max-age=15552000; includeSubDomains";
    add_header Referrer-Policy "no-referrer";
    client_max_body_size 1G;

    location /ん尺 {
        root /var/www/<DOMAIN>;
        try_files $uri $uri/ /index.html =404;
        index index.html;
        gzip_static on;
        gzip_http_version 1.1;
        gzip_vary on;
        gzip_comp_level 6;
        gzip_types *;
        gzip_proxied no-cache no-store private expired auth;
        gzip_min_length 1000;
        default_type application/octet-stream;
    }

    include /etc/nginx/ssl.conf;
    ssl_certificate /etc/letsencrypt/live/<DOMAIN>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<DOMAIN>/privkey.pem;
}
As you can see, the path is not / but /ん尺, because the / path is serving something else.
At the same time, my index.html base is <base href="/ん尺/">, so the resources point to the right place initially.
Is there something wrong with my setup?

CORS Issue after latest Chrome 85 Update

I am a very new user here, so apologies in advance if I break any rule. Here is the problem I am facing; I need suggestions, please.
I have a Chrome extension which works with Gmail and consumes APIs from my Rails application, served by Phusion Passenger behind nginx.
My versions are nginx/1.15.8 and Phusion Passenger Enterprise 6.0.1.
I had the CORS settings in nginx as follows:
####### CORS Management ##########
add_header 'Access-Control-Allow-Origin' 'https://mail.google.com,https://*.gmail.com';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, HEAD';
add_header Referrer-Policy "no-referrer";
add_header Pragma "no-cache";
##################################
This used to work until Chrome 84; with the Chrome 85 update, however, it started throwing CORS errors as follows:
########## Error started appearing in Chrome 85 ############
Access to fetch at 'https://my-site.com/' from origin 'https://mail.google.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
##########################################
After this, I opened the CORS settings wide, following suggestions from various sources and blogs; the updated CORS settings now look like this:
Updated CORS settings in nginx:
location / {
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' $http_origin always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
        # Custom headers and headers various browsers *should* be OK with but aren't
        add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;
        # Tell client that this pre-flight info is valid for 20 days
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain charset=UTF-8' always;
        add_header 'Content-Length' 0 always;
        return 204;
    }
    if ($request_method = 'POST') {
        add_header 'Access-Control-Allow-Origin' $http_origin always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;
    }
    if ($request_method = 'GET') {
        add_header 'Access-Control-Allow-Origin' $http_origin always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;
    }
}
############################################
After updating this setting in nginx, the CORS error is gone, but now I get a 401 Unauthorized from the server when the extension makes an API call.
I have tried tweaking all the methods but couldn't fix it. Is there something I am missing or doing differently?
Please help!
Isn't that the effect of this spec change?
Changes to Cross-Origin Requests in Chrome Extension Content Scripts
https://www.chromium.org/Home/chromium-security/extension-content-script-fetches
I had the same problem. My solution was (as described in the link above) to move the HTTP requests into the background script. You send a message to the background script and perform the request from there.
On receiving the response, the background script sends a message back to the content script, where you can handle the response data.
Content page                            Background page
     | --------- RequestData ----------->    |
     |               initialize the request and return to the content script
     |               .... some time later ....
     |               callback of the HTTP request finishes
     | <------- handleResponse ----------    | (in the callback handler)
Content Script:
var msg = new Object();
msg.message = "loadOrders";
chrome.runtime.sendMessage(msg);
Background script:
chrome.runtime.onMessage.addListener(
    function (msg, sender, sendResponse) {
        if (msg.message == "loadOrders") {
            // doXHRRequest is the extension's own helper that performs the API call
            doXHRRequest(function (responseData) {
                sendMessageToActiveTab(responseData);
            });
        }
    }
);

function sendMessageToActiveTab(responseData) {
    var msg = new Object();
    msg.message = "receiveOrders";
    msg.orderList = JSON.parse(responseData);
    chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
        chrome.tabs.sendMessage(tabs[0].id, msg);
    });
}
And last in the content script:
chrome.runtime.onMessage.addListener(function (message, sender, sendResponse) {
    if (message.message == "receiveOrders") {
        receiveOrderList(message.orderList);
    }
});
As stated in https://developers.google.com/web/updates/2020/07/chrome-85-deps-rems, Chrome 85 rejects insecure SameSite=None cookies:
Use of cookies with SameSite set to None without the Secure attribute is no longer supported. Any cookie that requests SameSite=None but is not marked Secure will be rejected. This feature started rolling out to users of Stable Chrome on July 14, 2020. See SameSite Updates for a full timeline and details. Cookies delivered over plaintext channels may be cataloged or modified by network attackers. Requiring secure transport for cookies intended for cross-site usage reduces this risk.
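Concretely, any cookie your server sets for cross-site use now needs both attributes; with a hypothetical session cookie:

```http
Set-Cookie: _session_id=abc123; SameSite=None            <- rejected by Chrome 85+
Set-Cookie: _session_id=abc123; SameSite=None; Secure    <- accepted (HTTPS only)
```

If the extension's API calls rely on the Rails session cookie, this rejection would explain the 401 Unauthorized: the cookie is silently dropped, so the request arrives unauthenticated.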

ActiveCollab: missing Access-Control-Allow-Headers in ActiveCollab API responses

I am building a small JavaScript app to list tasks from ActiveCollab using the API, but I am getting into CORS issues.
The issue occurs because the ActiveCollab API response does not include an Access-Control-Allow-Headers header; see https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSMissingAllowHeaderFromPreflight.
Would the ActiveCollab developers be willing to add the necessary headers to the API response?
Thank you,
Miguel
Answering my own question.
For local development I used a CORS browser extension (https://addons.mozilla.org/en-GB/firefox/addon/cors-everywhere) to work around the missing CORS headers.
In production, we serve the app via nginx and set up a proxy pass to set the correct headers. The app uses the proxy URL rather than the ActiveCollab API URL.
In the app settings:
VUE_APP_AC_API_URL = '<SERVER_URL>/ac-forwarder/<ACTIVECOLLAB_ACCOUNT>/api/v1'
In the nginx site settings:
location /ac-forwarder/ {
    proxy_pass https://app.activecollab.com/;
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' '<SERVER_URL>';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,X-Angie-AuthApiToken';
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain; charset=utf-8';
        add_header 'Content-Length' 0;
        return 204;
    }
    if ($request_method = 'GET') {
        #add_header 'Access-Control-Allow-Origin' '<SERVER_URL>';
        add_header 'Access-Control-Allow-Methods' 'GET';
        add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,X-Angie-AuthApiToken';
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
    }
}
location /app {
alias /path/to/the/built/app;
}
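One detail worth noting about the forwarder above: because proxy_pass is given with a URI part (the trailing /), nginx replaces the matched /ac-forwarder/ prefix when forwarding. With a hypothetical account name, the mapping looks like:

```
GET <SERVER_URL>/ac-forwarder/myaccount/api/v1/projects
 -> GET https://app.activecollab.com/myaccount/api/v1/projects
```

That is why the app can simply swap the ActiveCollab API base URL for the proxy URL.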

nginx location alias stop redirect

I have the following nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8080;
        server_name localhost;
        index index.html index.htm;

        location /docs {
            alias /usr/share/nginx/html;
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
                # Custom headers and headers various browsers *should* be OK with but aren't
                add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
                # Tell client that this pre-flight info is valid for 20 days
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain charset=UTF-8';
                add_header 'Content-Length' 0;
                return 204;
            }
            if ($request_method = 'POST') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
                add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
            }
            if ($request_method = 'GET') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
                add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
            }
        }
    }
}
nginx is running in Docker. Traefik acts as a proxy and routes the /docs path to the nginx container (port 8080), where nginx should simply return the static content.
My problem is that nginx always redirects me to http://api.example.com:8080/docs/ (which is not accessible, because nginx runs in Docker behind Traefik; that's why I need the path). I am simply trying to get the HTML content from the html directory at https://api.example.com/docs.
Additional output:
10.0.5.16 - example [11/Aug/2018:17:30:45 +0000] "GET /docs HTTP/1.1" 301 185 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.117 Safari/537.36"
How can I serve the content under the /docs URL without this unwanted redirect?
The external redirect happens because /docs resolves to a directory, so nginx issues a 301 to /docs/. To avoid it, you could use an internal rewrite from /docs to /docs/index.html.
For example:
location = /docs {
    rewrite ^ /docs/index.html last;
}
location /docs {
    ...
}
This worked for me (break after the rewrite, plus a second location for the other files):
location = /docs {
    root /usr/share/nginx/html;
    rewrite ^ /docs/index.html break;
}
location /docs {
    root /usr/share/nginx/html;
}