I'm trying to host a site (called site1) nested within an existing domain (www.gateway.com).
e.g. Instead of www.site1.com/profile, it would be www.gateway.com/site1/profile.
I have an NGINX reverse proxy that detects the /site1/ path and proxies it to some upstream machines:
location ~ /site1/(.*)$ {
    proxy_pass http://upstreams/$1$is_args$args;
    proxy_set_header Host $host;
}
The proxy itself is working fine - it redirects all the paths correctly. However, the site's assets (e.g. JS, CSS, etc.) do not preserve the base path (www.gateway.com/site1).
e.g. It is trying to load www.gateway.com/normalize.css, when the actual asset lives at www.gateway.com/site1/normalize.css.
For reference, the HTML for site1 is sourcing assets like so:
<link href="/normalize.css" rel="stylesheet" />
I've also tried removing the leading / in the href, but this results in the asset's path including the full route (less the last fragment) - also not what is desired.
Note that site1 works fine when hosted at the root of a domain (e.g. www.gateway.com/profile).
Any insights would be helpful. Thanks!
You might already have a block that checks for static files and is messing up your asset delivery. What you are doing looks fine to me, but some other nginx asset handling, either on the upstream or in site1 itself, is possibly interfering. Add a block that checks for static files. If you are only doing this for one or two sites, this is reasonable.
The code below should work until you can share more info on the other asset-handling config.
location ~ /site1/(.*)$ {
    # Serve common static asset types directly from disk. An alias inside
    # a regex location must reference a capture from that regex.
    location ~* ^/site1/(.*\.(?:js|css|jpg|jpeg|gif|png|ico|cur|svg))$ {
        alias /location/of/site1/$1;
        expires 1M;
        access_log off;
        sendfile on;
        sendfile_max_chunk 1m;
        add_header Cache-Control public;
    }
    # Anything that is not a static file falls through to the upstream.
    try_files $uri @nonStatic;
}
location @nonStatic {
    # $1 here is the capture from the /site1/ regex above.
    proxy_pass http://upstreams/$1$is_args$args;
    proxy_set_header Host $host;
}
I am having an issue with my Nginx configuration. I have enabled proxy_intercept_errors and created rewrite rules to display a particular HTML page from the server directory in case one of the 404, 502, or 503 errors occurs. I have tested these error codes and they are intercepted correctly, and the necessary HTML page with text is displayed. But the issue is that the images are somehow not displayed in my custom HTML page. Maybe there is something wrong with the file paths or the Nginx configuration.
HTML file location on the server:
/var/www/html/
Images location:
/var/www/html/images/
For the aforementioned directories I have set chmod 755, and for the files chmod 644.
Nginx settings:
server {
    listen 443 ssl http2;
    server_name example.net;

    proxy_intercept_errors on;
    root /var/www/html;
    error_page 404 @404;
    error_page 502 @502;
    error_page 503 @503;

    access_log /var/log/nginx/example.net.ssl.access.log;
    error_log /var/log/nginx/example.net.ssl.error.log;

    ssl_certificate /etc/letsencrypt/live/example.net/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.net/privkey.pem;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080;
    }

    location @404 {
        rewrite ^(.*)$ /404.html break;
    }

    location @502 {
        rewrite ^(.*)$ /502.html break;
    }

    location @503 {
        rewrite ^(.*)$ /503.html break;
    }
}
HTML file content:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>404</title>
<style>
img {
  display: block;
  margin-left: auto;
  margin-right: auto;
}
</style>
</head>
<body>
<img src="/images/404.jpg" alt="404 Not Found" style="width:50%;">
</body>
</html>
When I point a location in Nginx directly at the image files, they are served and displayed correctly. The problem with the images not displaying only appears when the HTML file is served.
You need to tell Nginx how to handle the /images/404.jpg URL. It does not know that it came from a previous error_page 404 exception. The request to the URL that caused the error_page 404 exception and the request to the URL for the image are essentially independent of each other.
Assuming that localhost:8080 never needs to see any URL that begins with /images/, just add a simple location block:
location /images/ {}
It will use the value of root defined in the outer block.
If localhost:8080 needs to see some URLs that begin with /images/, use a more specific rule:
location = /images/404.jpg {}
location = /images/502.jpg {}
location = /images/503.jpg {}
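Either way, in context the relevant pieces of the server block would look roughly like this (a sketch using the paths from the question):
server {
    root /var/www/html;
    proxy_intercept_errors on;
    error_page 404 @404;
    # Error-page images are served from the filesystem (using the root
    # above) instead of being proxied to localhost:8080.
    location /images/ {}
    location / {
        proxy_pass http://localhost:8080;
    }
    location @404 {
        rewrite ^(.*)$ /404.html break;
    }
}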
My contact.html file is being downloaded rather than rendered in the browser when running my site on NGINX. home.html is working properly. This is what my default file (in the sites-available folder) looks like:
server {
    listen 90;
    listen [::]:90;
    server_name example.com;
    root /home/myname/www;

    location / {
        try_files $uri /home.html;
        add_header Access-Control-Allow-Origin *;
    }

    location = /contact {
        default_type text/html;
        alias /home/myname/www/contact.html;
    }
}
When I add /contact to my URL in the browser, contact.html gets downloaded as an unknown file format. After having done an extensive search, these are the things I've tried:
Clearing the browser cache (it also happens in Edge, so clearly this isn't the issue)
In nginx.conf, commenting out the default_type application/octet-stream and un-commenting default_type text/html
Checking that the type text/html exists in the mime.types file
Using try_files $uri /contact.html
Any help will be appreciated!
The issue was that the default_type text/html directive in nginx.conf lives in the http {...} block. Since my server was listening on port 90, this configuration did not apply. Once I changed the port to 80, the issue was resolved.
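If moving to port 80 is not an option, it should also work to make the type mapping explicit inside the server block itself (a sketch, untested against the setup above):
server {
    listen 90;
    root /home/myname/www;
    # Apply MIME type mapping inside this server block, independent of
    # what the surrounding http {} block defines.
    include /etc/nginx/mime.types;
    default_type text/html;
    ...
}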
I have a web application running on Apache Tomcat 8.5 that is proxied behind NGINX, i.e. I am using NGINX to offload SSL and serve static images etc. The app has been working reliably for years.
Now, the Chrome 87 update is causing a warning "The information that you’re about to submit is not secure" on every form submission. I've gone through the code with a fine-toothed comb and I can't figure out what could be triggering it.
The user gets to NGINX on https and the certificate is valid. NGINX forwards the request to Tomcat on port 8080. See config below.
The forms are submitted on the tomcat server as HTTP. But NGINX should prevent the browser from knowing that. It's https as far as the browser knows...
All form tags are written with relative links or implicitly target the same URL, e.g.
<form action="/login/login.do" method="post"> or <form method="post">.
Can anyone please point out something to look for? Am I missing a header or something?
Thanks in advance
from NGINX conf.d/site.conf:
location ~ \.(do|jsp)$ {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
}
Seems like there was a change in Chrome 87 to give warnings for mixed forms (an https page posting to an http action), so that is probably why those warnings are appearing.
Perhaps there are some stray absolute links within your application which are still http, and are not being automatically converted when proxied by nginx?
If you are sure all your content is served over https, you can try enabling the Content-Security-Policy: upgrade-insecure-requests header to force browsers to upgrade insecure connections automatically.
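In nginx that header could be added with something like this (a sketch):
# Ask browsers to transparently upgrade http:// subresources and
# form actions on this site to https://.
add_header Content-Security-Policy "upgrade-insecure-requests";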
I had a similar issue, and in my case it was the response from my app server being a redirect to a different scheme (http) than the one used by the client (https).
If that's your case as well, adding this to your location definition should do the trick. Assuming your app/app server respects this header, it should respond with the proper scheme (https) in the Location header.
proxy_set_header X-Forwarded-Proto $scheme;
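Applied to the location block from the question, that looks like:
location ~ \.(do|jsp)$ {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    # Tell Tomcat which scheme the client actually used, so redirects
    # are generated with https:// instead of http://.
    proxy_set_header X-Forwarded-Proto $scheme;
}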
For completeness, excerpt for X-Forwarded-Proto from MDN docs:
The X-Forwarded-Proto (XFP) header is a de-facto standard header for identifying the protocol (HTTP or HTTPS) that a client used to connect to your proxy or load balancer.
I have NGINX, itself running as a container, set up as a reverse proxy for a virtual network of Docker containers. One of these containers serves an Angular 4 based SPA with client-side routing in HTML5 mode.
The application is mapped to location / on NGINX, so that http://server/ brings you to the SPA home screen.
server {
    listen 80;
    ...
    location / {
        proxy_pass http://spa-server/;
    }
    location /other/ {
        proxy_pass http://other/;
    }
    ...
}
The Angular router changes the URL to http://server/home or other routes when navigating within the SPA.
However, when I try to access these URLs directly, a 404 is returned. This error originates from the spa-server, because it obviously does not have any content for these routes.
The examples I found for configuring NGINX to support this scenario always assume that the SPA's static content is served directly from NGINX and thus try_files is a viable option.
How is it possible to forward any unknown URLs to the SPA so that it can handle them itself?
The solution that works for me is to add the directives proxy_intercept_errors and error_page to the location / in NGINX:
server {
    listen 80;
    ...
    location / {
        proxy_pass http://spa-server/;
        proxy_intercept_errors on;
        error_page 404 = /index.html;
    }
    location /other/ {
        proxy_pass http://other/;
    }
    ...
}
Now, NGINX will return /index.html, i.e. the SPA, from the spa-server whenever an unknown URL is requested. Still, the URL is available to Angular, and the router will immediately resolve it within the SPA.
Of course, the SPA is now responsible for handling "real" 404s. Fortunately, this is not a problem, and it is good practice within an SPA anyway.
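One thing to keep in mind: proxy_intercept_errors only redirects upstream responses for which an error_page is actually defined, so with the config above only 404s are rerouted to the SPA; other upstream errors are passed through unchanged.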
UPDATE: Thanks to @dan
I cannot figure out how to get nginx to serve my static files with React Router's HistoryLocation configuration. The setups I've tried either prevent me from refreshing or accessing the URL as a top location (failing with a 404 Cannot GET /...), or prevent me from submitting POST requests.
Here's my initial nginx setup (not including my mime.types file):
nginx.conf
# The maximum number of connections for Nginx is calculated by:
# max_clients = worker_processes * worker_connections
worker_processes auto;

# Process needs to run in foreground within container
daemon off;

events {
    worker_connections 1024;
}

http {
    # Hide nginx version information.
    server_tokens off;

    # Define the MIME types for files.
    include /etc/nginx/mime.types;

    # Update charset_types due to updated mime.types
    charset_types
        text/xml
        text/plain
        text/vnd.wap.wml
        application/x-javascript
        application/rss+xml
        text/css
        application/javascript
        application/json;

    # Speed up file transfers by using sendfile() to copy directly
    # between descriptors rather than using read()/write().
    sendfile on;

    # Define upstream servers
    upstream node-app {
        ip_hash;
        server 192.168.59.103:8000;
    }

    include sites-enabled/*;
}
default
server {
    listen 80;
    root /var/www/dist;
    index index.html index.htm;

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires 1d;
    }

    location @proxy {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_pass http://node-app;
        proxy_cache_bypass $http_upgrade;
    }

    location / {
        try_files $uri $uri/ @proxy;
    }
}
All the functionality I'm expecting of nginx as a reverse proxy is there, except it gives me the aforementioned 404 Cannot GET when I refresh or directly access a route. After poking around for solutions, I tried adding
if (!-e $request_filename) {
    rewrite ^(.*)$ /index.html break;
}
in the location / block. This allows me to refresh and directly access the routes as top locations, but now I can't submit PUT/POST requests, instead getting back a 405 Method Not Allowed. I can see the requests are not being handled properly: the configuration I added now rewrites all of my requests to /index.html, and that is where my API is receiving them. I don't know how to accomplish both submitting my PUT/POST requests to the right resource and being able to refresh and access my routes using React Router's HistoryLocation.
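One way to express that intent, sketched against the config above: let try_files fall back to /index.html only for page navigations, and route API traffic to the upstream by path instead of by file existence. The /api/ prefix below is hypothetical; substitute whatever paths your API actually uses.
# Hypothetical API prefix: always proxied, never rewritten to /index.html.
location /api/ {
    proxy_pass http://node-app;
}

location / {
    # Unknown paths fall back to the SPA entry point instead of 404ing,
    # so client-side routes survive a refresh.
    try_files $uri $uri/ /index.html;
}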