Implement nginx websocket support using JSON

I'm trying to implement websocket support using JSON. The code below is what I'm using:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
I'm not able to understand how I can express proxy_set_header in the form of JSON.
I've tried adding it in the form below:
{
proxy_http_version: "1.1",
proxy_set_header: $http_upgrade,
proxy_set_header: "upgrade"
}
It shows an invalid-parameter error, as I've not passed the "Upgrade" and "Connection" header names. Is there any way I can include those in the JSON?

Related

Error on Google Chrome when connecting to Websocket proxy

I have a simple back-end Kotlin application that runs a Netty server on a Google Cloud virtual machine. It receives websocket connections and sends some simple messages to clients. I also have an nginx server running on the same machine; it listens on port 443 and redirects requests to my application (127.0.0.1:8080). Here is the nginx configuration:
server {
listen 443 ssl;
server_name www.mydomain.com;
ssl_certificate /etc/nginx/certs/my-cert.crt;
ssl_certificate_key /etc/nginx/certs/my-key.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_read_timeout 86400;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $http_host;
}
}
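Note that $connection_upgrade is not a built-in nginx variable. Assuming this config follows the usual pattern from the nginx websocket proxying documentation, there must be a matching map block in the http context, along these lines:

```nginx
# Send "upgrade" to the backend only when the client asked for an upgrade;
# otherwise send "close" so ordinary requests are proxied as usual.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

If the map were missing, nginx would refuse to start with an "unknown variable" error, so it is presumably defined elsewhere in this config.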
The SSL certificate in the config is valid and certified by a real CA.
Now I'm trying to write a simple front-end Angular app that connects to my proxy:
return webSocket({
url: `wss://www.mydomain.com/${path}`,
closeObserver: {
next: (event: CloseEvent) => { console.log(event) }
}
}).multiplex(() => {}, () => {}, () => true)
I am subscribing to an Observable returned by this method and printing incoming messages.
All this works fine in every browser except Google Chrome (I tried Firefox, Opera, Chromium, Edge). Everything also works in the Chrome extension Smart Websocket Client. It even works fine in Chrome's incognito mode, but fails in regular mode.
On chrome I get
WebSocket connection to 'wss://mydomain.com/something' failed.
The CloseEvent that I log isn't very helpful; it just says that the code is 1006, and the reason field is empty.
When I bypass the proxy and connect directly to my app with ws://www.mydomain.com:8080/something, it works fine on chrome.
I guess something is wrong in my nginx config, but I can't really tell what. All the guides for configuring nginx for websockets say this is how it should be configured.
I spent 2 days searching for information about this and didn't find any real answers as to why this is happening or what I can do to fix it.
Does anyone have any ideas why this is happening?
UPDATE
Here is another interesting thing. I wrote a simple script that connects to my proxy, just like my Angular app, but using the standard WebSocket API.
let ws = new WebSocket("wss://mydomain.com/something");
ws.onmessage = (ev) => {
console.log(ev.data);
document.getElementById("result").innerHTML += ev.data
};
ws.onerror = (err) => {
console.log(err)
}
When I just open this file in Chrome using file://, everything works. It connects to my ws server and prints incoming messages on screen. But when I run a local Apache server and serve the same file on localhost:80, I get the same error as before in Chrome. (Other browsers and incognito mode still work fine when the file is accessed through localhost.)
So this issue doesn't have much to do with Angular.
You mentioned that it works in private/incognito mode. Have you tried disabling all extensions and then connecting to the websocket?

Nginx SSL Breaks on Posted Forms

All my GET based pages load fine, but posting forms returns a 400 response.
Here's my relevant nginx config:
server {
listen 443 ssl http2;
server_name tasks.technically.fun www.tasks.technically.fun;
ssl_certificate ssl/technically.fun/fullchain.pem;
ssl_certificate_key ssl/technically.fun/privkey.pem;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://127.0.0.1:5000;
}
}
The goal was to have all http endpoints redirected to https, and then display a 400 error on unsupported domains.
The third server section should cover all endpoints for https://tasks.technically.fun/*
When I inspect a form, say the login form for my website, everything about the request looks correct.
I've isolated it to the proxy_set_header Connection "upgrade";, if I disable that my forms start working again.
However, this breaks my usage of SignalR, which relies on websockets, and it's my understanding that I need that header for websockets.
Is the best practice here to apply those three header directives only on a dedicated endpoint where my SignalR endpoints sit?
The solution had to do with the websocket headers I was appending so that SignalR would work.
I don't know the details of why these headers broke my forms, but I modified my SignalR config to use a dedicated route and then separated my location sections into two: one for normal routes, and a dedicated one for the websocket endpoint to use.
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://127.0.0.1:5000;
}
location /taskHub {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://127.0.0.1:5000;
}
This lets my websockets continue to function on the /taskHub endpoint, but all other endpoints function as normal.
If anyone has details on why the Connection "upgrade" header breaks form posting over SSL, that would be good to know and appreciated if you shared!
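A plausible explanation (hedged, since the backend's exact behavior isn't shown here): with the original config, nginx sends Connection: upgrade to the backend on every proxied request, including plain form POSTs, and some servers reject a request that announces a protocol upgrade it never performs. The pattern in the nginx websocket proxying documentation avoids splitting into two locations by making the header conditional with a map in the http context:

```nginx
# Forward "upgrade" only when the client actually requested a websocket
# upgrade; otherwise send "close" so normal requests pass through untouched.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

The location block then uses proxy_set_header Connection $connection_upgrade; instead of the hard-coded "upgrade", which lets a single location serve both ordinary form posts and SignalR websocket traffic.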

Insecure forms and proxy servers?

I have a web application running on Apache Tomcat 8.5 that is proxied behind NGINX, i.e. I am using NGINX to offload SSL and serve static images etc. The app has been working reliably for years.
Now the Chrome 87 update is causing a warning "The information that you’re about to submit is not secure" on every form submission. I've gone through the code with a fine-tooth comb and I can't figure out what could be triggering it.
The user gets to NGINX over https and the certificate is valid. NGINX forwards the request to Tomcat on port 8080. See the config below.
The forms are submitted to the Tomcat server over plain HTTP, but NGINX should prevent the browser from knowing that. It's https as far as the browser knows...
All tags are written as relative links or implied to be the same URL. e.g.
<form action="/login/login.do" method="post"> or <form method="post">.
Can anyone please point out something to look for? Am I missing a header or something?
Thanks in advance
from NGINX conf.d/site.conf:
location ~ \.(do|jsp)$ {
proxy_pass http://127.0.0.1:8080;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
}
It seems there was a change in Chrome 87 to give warnings for mixed forms, so that is probably why those warnings are appearing.
Perhaps there are some stray absolute links within your application which are still http, and are not being automatically converted when proxied by nginx?
If you are sure all your content is served over https, you can try enabling this header Content-Security-Policy: upgrade-insecure-requests (more info here) to force browsers to upgrade insecure connections automatically.
I had a similar issue, and in my case it was the response from my app server being a redirect to a different scheme (http) than the one used by the client (https).
If that's your case as well, adding this to your location definition should do the trick. Assuming your app/app server respects this header, it should then respond with the proper scheme (https) in the Location header.
proxy_set_header X-Forwarded-Proto $scheme;
For completeness, excerpt for X-Forwarded-Proto from MDN docs:
The X-Forwarded-Proto (XFP) header is a de-facto standard header for identifying the protocol (HTTP or HTTPS) that a client used to connect to your proxy or load balancer.

NGINX proxy breaking HTML play()

I have a strange problem with nginx as a reverse proxy breaking the HTML DOM play() method. I have nginx running in a Docker container as a proxy for a couple of web applications and a Node API, each running in individual Docker containers.
If I access the web app that uses HTML play() proxied through nginx, the sounds don't play and I get an
Uncaught (in promise) DOMException: The element has no supported sources.
error in the browser. This is Chrome, but I get something similar in Safari.
When I expose port 8080 on the application container and access the page directly, the sounds play with no issues. What's even more confusing is that there are four different play() statements, and one works while the others don't.
For testing I created a very simplified webpage to make sure nothing else was causing this issue:
<button onclick="playRed()">Red</button>
<button onclick="playGreen()">Green</button>
<script>
var soundGreen = new Audio("./sound/Ding.mp3");
var soundRed = new Audio("./sound/Ding-ding-ding-sound.mp3");
function playRed() {
soundRed.play();
}
function playGreen() {
soundGreen.play();
}
</script>
The web applications are running in httpd:2.4-alpine images.
I'm using the nginx:1.13 official image with the following default.conf:
server {
listen 80 default_server;
location / {
proxy_pass http://web/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
location /vnode {
proxy_pass http://vnode/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
location /api {
proxy_pass http://api:3000/api;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
The application that's breaking is on the /vnode path.
After sleeping on the problem, I realized the issue was caused by a bad path. Since this wasn't set up as a virtual server, all assets needed to be fully pathed, as below.
var soundGreen = new Audio("vnode/sound/Ding.mp3");
var soundRed = new Audio("vnode/sound/Ding-ding-ding-sound.mp3");
What made the problem unclear is that no 404 errors were generated even though the files were not accessible.
I still don't understand how one sound was working, but it too stopped working as I experimented more. Not a good answer to that one, but it works now.
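The browser's URL-resolution rules make this failure plausible. Assuming the page was reached at /vnode without a trailing slash (the host name below is hypothetical), "./sound/..." resolves against the parent of the base path and silently drops the /vnode prefix, while the "vnode/sound/..." form from the fix restores it:

```javascript
// Hypothetical page URL: the app reached at /vnode, no trailing slash.
const page = "http://proxy.example/vnode";

// "./sound/..." resolves against the base path's parent, losing /vnode:
console.log(new URL("./sound/Ding.mp3", page).href);
// -> http://proxy.example/sound/Ding.mp3  (misses the /vnode location block)

// The prefixed form resolves back under /vnode:
console.log(new URL("vnode/sound/Ding.mp3", page).href);
// -> http://proxy.example/vnode/sound/Ding.mp3
```

One hedged guess about the missing 404s: the un-prefixed request falls through to the location / block and the other upstream, which may answer 200 with an HTML body that the Audio element then can't decode.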

How can Nginx serve index.html while proxying POST requests using React Router's HistoryLocation

I cannot figure out how to have nginx serve my static files with React Router's HistoryLocation configuration. The setups I've tried either prevent me from refreshing or accessing the URL as a top location (failing with a 404 Cannot GET /...), or prevent me from submitting POST requests.
Here's my initial nginx setup (not including my mime.types file):
nginx.conf
# The maximum number of connections for Nginx is calculated by:
# max_clients = worker_processes * worker_connections
worker_processes auto;
# Process needs to run in foreground within container
daemon off;
events {
worker_connections 1024;
}
http {
# Hide nginx version information.
server_tokens off;
# Define the MIME types for files.
include /etc/nginx/mime.types;
# Update charset_types due to updated mime.types
charset_types
text/xml
text/plain
text/vnd.wap.wml
application/x-javascript
application/rss+xml
text/css
application/javascript
application/json;
# Speed up file transfers by using sendfile() to copy directly
# between descriptors rather than using read()/write().
sendfile on;
# Define upstream servers
upstream node-app {
ip_hash;
server 192.168.59.103:8000;
}
include sites-enabled/*;
}
default
server {
listen 80;
root /var/www/dist;
index index.html index.htm;
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires 1d;
}
location @proxy {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_http_version 1.1;
proxy_redirect off;
proxy_pass http://node-app;
proxy_cache_bypass $http_upgrade;
}
location / {
try_files $uri $uri/ @proxy;
}
}
All the functionality I'm expecting of nginx as a reverse proxy is there, except it gives me the aforementioned 404 Cannot GET. After poking around for solutions, I tried to add
if (!-e $request_filename){
rewrite ^(.*)$ /index.html break;
}
in the location / block. This allows me to refresh and directly access the routes as top locations, but now I can't submit PUT/POST requests, instead getting back a 405 Method Not Allowed. I can see the requests are not being handled properly, as the configuration I added now rewrites all my requests to /index.html, and that's where my API is receiving all the requests. But I don't know how to accomplish both: submitting my PUT/POST requests to the right resource, and being able to refresh and access my routes using React Router's HistoryLocation.
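One common arrangement that satisfies both requirements is to route API traffic explicitly instead of relying on a rewrite, and let only page navigation fall back to index.html. This is a sketch; the /api/ prefix is an assumption, so adjust it to wherever the node app actually mounts its routes:

```nginx
# PUT/POST API calls go straight to the upstream, untouched by the fallback.
location /api/ {
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://node-app;
}

# Everything else: serve the file if it exists, otherwise hand index.html
# to the browser so React Router can resolve the route on the client.
location / {
    try_files $uri $uri/ /index.html;
}
```

With this split, refreshing or deep-linking a client-side route returns index.html with a 200, while form submissions and API calls never hit the static fallback.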