Nginx proxy breaking HTML Audio play()

I have a strange problem with nginx as a reverse proxy breaking the HTML Audio play() method. I have nginx running in a Docker container as a proxy for a couple of web applications and a Node API, each running in its own Docker container.
If I access the web app that uses Audio play() proxied through nginx, the sounds don't play and I get a
Uncaught (in promise) DOMException: The element has no supported sources.
error in the browser. This is in Chrome, but I get something similar in Safari.
When I expose port 8080 on the application container and access the page directly, the sounds play with no issues. What's even more confusing is that there are four different play() calls, and one works while the others don't.
For testing I created a very simplified webpage to make sure nothing else was causing this issue:
<button onclick="playRed()">Red</button>
<button onclick="playGreen()">Green</button>
<script>
    var soundGreen = new Audio("./sound/Ding.mp3");
    var soundRed = new Audio("./sound/Ding-ding-ding-sound.mp3");
    function playRed() {
        soundRed.play();
    }
    function playGreen() {
        soundGreen.play();
    }
</script>
The web applications are running in httpd:2.4-alpine images.
I'm using the nginx:1.13 official image with the following default.conf:
server {
    listen 80 default_server;

    location / {
        proxy_pass http://web/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /vnode {
        proxy_pass http://vnode/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /api {
        proxy_pass http://api:3000/api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
The application that's breaking is on the /vnode path.

After sleeping on the problem, I realized the issue was caused by a bad path. Since this wasn't set up as a virtual server, all assets needed to be fully pathed, as below.
var soundGreen = new Audio("vnode/sound/Ding.mp3");
var soundRed = new Audio("vnode/sound/Ding-ding-ding-sound.mp3");
What made the problem unclear is that no 404 errors were generated even though the files were not accessible, presumably because the relative ./sound/ URLs resolved against the site root rather than /vnode/, so the requests were answered (with the wrong content) by the application behind location / instead of failing outright.
I still don't understand how one sound was working, and it too stopped working as I experimented more. I don't have a good answer for that one, but it works now.
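For what it's worth, a related way to guard against this class of problem is to make sure the proxied app is always reached with a trailing slash, so the browser resolves relative URLs like ./sound/Ding.mp3 under the /vnode/ prefix. This is only a sketch of that idea, not what the author did; the upstream name is taken from the config above:
# Hypothetical variant: redirect the bare prefix so relative URLs
# resolve to /vnode/sound/... instead of /sound/...
location = /vnode {
    return 301 /vnode/;
}

location /vnode/ {
    proxy_pass http://vnode/;  # trailing slash strips /vnode/ before proxying
    proxy_set_header Host $host;
}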

Related

Implement nginx websocket using json

I'm trying to implement websocket support using JSON. The below code is what I'm using:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
I'm not able to understand how I can express proxy_set_header in the form of JSON. I've tried adding it in the below form:
{
    proxy_http_version: "1.1",
    proxy_set_header: $http_upgrade,
    proxy_set_header: "upgrade"
}
It shows an invalid parameter error because I haven't passed "Upgrade" and "Connection". Is there any way I can include those in JSON?
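For reference, in native nginx syntax each proxy_set_header directive takes two arguments, the header name and its value, which is exactly what the JSON attempt above is dropping. A minimal sketch of the intended block in plain nginx config (the location path and backend address are placeholders):
location /ws/ {
    # Placeholder backend; substitute the real upstream.
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    # The header name and its value must travel together:
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
Whatever JSON encoding the tool in question expects would therefore need to represent each header as a name/value pair rather than as a bare value.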

Error on Google Chrome when connecting to Websocket proxy

I have a simple back-end Kotlin application that runs a Netty server on a Google Cloud virtual machine. It receives websocket connections and sends some simple messages to clients. I also have an nginx server running on the same machine; it listens on port 443 and redirects requests to my application (127.0.0.1:8080). Here is the nginx configuration:
server {
    listen 443 ssl;
    server_name www.mydomain.com;

    ssl_certificate /etc/nginx/certs/my-cert.crt;
    ssl_certificate_key /etc/nginx/certs/my-key.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_read_timeout 86400;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $http_host;
    }
}
The SSL certificate in the config is valid and certified by a real CA.
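A side note on the config above: $connection_upgrade is not a built-in nginx variable, so it must be defined with a map block at the http level somewhere else in this setup (nginx would refuse to start otherwise). For completeness, the usual definition from the standard nginx websocket proxying recipe is:
# Defined at http {} level: send "Connection: upgrade" only when the
# client actually sent an Upgrade header; otherwise "Connection: close".
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}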
Now I'm trying to write a simple front-end Angular app that connects to my proxy:
return webSocket({
    url: `wss://www.mydomain.com/${path}`,
    closeObserver: {
        next: (event: CloseEvent) => { console.log(event) }
    }
}).multiplex(() => {}, () => {}, () => true)
I am subscribing to an Observable returned by this method and printing incoming messages.
All this works fine in every browser except Google Chrome (I tried Firefox, Opera, Chromium, Edge). Everything also works in the Chrome extension Smart Websocket Client. It even works fine in Chrome's incognito mode, but fails in regular mode.
In Chrome I get
WebSocket connection to 'wss://mydomain.com/something' failed.
The CloseEvent that I log isn't very helpful; it just says that the code is 1006, and the reason field is empty.
When I bypass the proxy and connect directly to my app with ws://www.mydomain.com:8080/something, it works fine in Chrome.
I guess something is wrong in my nginx config, but I can't really tell what. All the guides for configuring nginx for websockets say that this is how it should be configured.
I spent two days searching for information about this and didn't find any real answers as to why this is happening or what I can do to fix it.
Does anyone have any ideas why this is happening?
UPDATE
Here is another interesting thing. I wrote a simple script that connects to my proxy, just like my Angular app does, but using the standard API.
let ws = new WebSocket("wss://mydomain.com/something");
ws.onmessage = (ev) => {
    console.log(ev.data);
    document.getElementById("result").innerHTML += ev.data
};
ws.onerror = (err) => {
    console.log(err)
}
When I just open this file in Chrome using file://, everything works: it connects to my ws server and prints incoming messages on screen. But when I run a local Apache server and serve the same file on localhost:80, I get the same error as before in Chrome. (Other browsers and incognito mode still work fine when the file is accessed through localhost.)
So this issue doesn't have much to do with Angular.
You mentioned that it works in private/incognito mode; have you tried disabling all extensions and then connecting to the websocket?

Nginx SSL Breaks on Posted Forms

All my GET-based pages load fine, but posting forms returns a 400 response.
Here's my relevant nginx config:
server {
    listen 443 ssl http2;
    server_name tasks.technically.fun www.tasks.technically.fun;

    ssl_certificate ssl/technically.fun/fullchain.pem;
    ssl_certificate_key ssl/technically.fun/privkey.pem;

    location / {
        proxy_set_header Host $Host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:5000;
    }
}
The goal was to have all HTTP endpoints redirected to HTTPS, and then display a 400 error on unsupported domains.
The third server section (not shown here) should be covering all endpoints for https://tasks.technically.fun/*
When I inspect a form, say, the login form for my website, the request looks all correct.
I've isolated it to the proxy_set_header Connection "upgrade"; directive: if I disable that, my forms start working again.
However, removing it breaks my usage of SignalR, which relies on WebSockets, and it's my understanding that I need that header for websockets.
Is the best practice here to apply those three header directives only in a dedicated location for my SignalR endpoints?
The solution had to do with the WebSocket headers I was appending so that SignalR would work.
I don't know the details of why these headers broke my forms, but I modified my SignalR config to use a dedicated route and then separated my location sections into two: one for normal routes and a dedicated one for the websocket endpoint to use.
location / {
    proxy_set_header Host $Host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://127.0.0.1:5000;
}

location /taskHub {
    proxy_set_header Host $Host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://127.0.0.1:5000;
}
This lets my websockets continue to function on the /taskHub endpoint, while all other endpoints function as normal.
If anyone has details on why the Connection "upgrade" header breaks form posting over SSL, that would be good to know; I'd appreciate it if you shared!
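A likely explanation, and a common single-location alternative (this is the standard recipe from the nginx documentation, not something the poster confirmed): hard-coding Connection "upgrade" sends that header on every proxied request, including ordinary form POSTs, and some backends (Kestrel among them, as far as I know) reject a request that claims to be a protocol upgrade but carries a body. The usual fix is to derive the Connection header from whether the client actually asked to upgrade:
# At the http {} level: forward "Connection: upgrade" only when the
# client sent an Upgrade header; otherwise send "Connection: close".
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then a single location can serve both forms and websockets:
location / {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_pass http://127.0.0.1:5000;
}
With this map in place, plain POSTs no longer masquerade as upgrade requests, so splitting the locations becomes a matter of taste rather than necessity.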

NET::ERR_CERT_AUTHORITY_INVALID locally in Chrome (but not incognito or Firefox) with valid certs on nginx

A couple of weeks ago we implemented the SameSite cookie policy for our cookies. To develop locally, I needed a certificate to get the cookies.
We're running a Node Express server that is reverse proxied through an nginx configuration where we add the cert.
# Server configuration
#
server {
    listen 443;
    server_name test-local.ad.ourdomain.com;

    ssl_certificate /home/myname/.certs/ourcert.crt;
    ssl_certificate_key /home/myname/.certs/ourkey.rsa;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:9090;
        proxy_read_timeout 90;
        proxy_redirect http://localhost:9090 https://test-local.ad.ourdomain.com;
    }
}
Now to the weird part. We updated to Chrome 80 today, and all of a sudden I got an HSTS issue: I was unable to access the site even if I wanted to (no opt-in possibility). I tried to clear that inside chrome://net-internals/#hsts, and that worked. However, I still get NET::ERR_CERT_AUTHORITY_INVALID, but I now have the opt-in alternative.
Accessing it from Chrome incognito mode works like a charm, no issues there. Same with Firefox, no issues there either; it says the certificate is valid, green and pretty. I checked here as well: https://www.sslshopper.com/certificate-decoder.html and it's 100% green.
I'm running Ubuntu 19.10 using Regolith.
My colleagues are using the same cert, also on Chrome 80, but they're running Mac; no issues there in Chrome.
Any ideas? I tried clearing browser settings; no change.
I have some great news!
We're using the same cert on our cloud dev environments (however, there it is in pfx form). Locally I run Linux as mentioned, so I had to convert the pfx to an RSA key file and a CRT file.
I entered our dev domain on this site: https://whatsmychaincert.com/ and it downloaded a *.chain.crt file. I appended it to my old crt file with this command:
cat example.com.crt example.com.chain.crt > example.com.chained.crt
In nginx I then referenced the .chained.crt file.
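For illustration, with the paths from the config above and the filename from the example command, the updated directives would look something like this (the names are placeholders carried over from the snippets, not the poster's actual files):
# Serve the leaf certificate plus its intermediate chain.
ssl_certificate /home/myname/.certs/example.com.chained.crt;
ssl_certificate_key /home/myname/.certs/ourkey.rsa;
One plausible reading of the symptoms is that the Mac and incognito verifiers were able to fill in the missing intermediate certificates while the Linux Chrome verifier was not, which is why serving the full chain from nginx fixed it.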
Now Chrome accepts my local, secure webpage.
We had the same issue and fixed it following petur's solution.

How can Nginx serve index.html while proxying POST requests using React Router's HistoryLocation

I cannot figure out how to get nginx to serve my static files with React Router's HistoryLocation configuration. The setups I've tried either prevent me from refreshing or accessing the URL as a top location (failing with a 404 Cannot GET /...) or prevent me from submitting POST requests.
Here's my initial nginx setup (not including my mime.types file):
nginx.conf
# The maximum number of connections for Nginx is calculated by:
# max_clients = worker_processes * worker_connections
worker_processes auto;

# Process needs to run in foreground within container
daemon off;

events {
    worker_connections 1024;
}

http {
    # Hide nginx version information.
    server_tokens off;

    # Define the MIME types for files.
    include /etc/nginx/mime.types;

    # Update charset_types due to updated mime.types
    charset_types
        text/xml
        text/plain
        text/vnd.wap.wml
        application/x-javascript
        application/rss+xml
        text/css
        application/javascript
        application/json;

    # Speed up file transfers by using sendfile() to copy directly
    # between descriptors rather than using read()/write().
    sendfile on;

    # Define upstream servers
    upstream node-app {
        ip_hash;
        server 192.168.59.103:8000;
    }

    include sites-enabled/*;
}
default
server {
    listen 80;

    root /var/www/dist;
    index index.html index.htm;

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires 1d;
    }

    location @proxy {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_pass http://node-app;
        proxy_cache_bypass $http_upgrade;
    }

    location / {
        try_files $uri $uri/ @proxy;
    }
}
All the functionality I'm expecting of nginx as a reverse proxy is there, except that it gives me the aforementioned 404 Cannot GET. After poking around for solutions, I tried to add
if (!-e $request_filename){
    rewrite ^(.*)$ /index.html break;
}
in the location / block. This allows me to refresh and directly access the routes as top locations, but now I can't submit PUT/POST requests, instead getting back a 405 method not allowed. I can see the requests are not being handled properly as the configuration I added now rewrites all my requests to /index.html, and that's where my API is receiving all the requests, but I don't know how to accomplish both being able to submit my PUT/POST requests to the right resource, as well as being able to refresh and access my routes using React Router's HistoryLocation.