Unable to replace Nginx 495/496 error page - configuration

I am trying to change the page that gets returned to the client when no client certificate has been sent. My config is below:
server {
    error_log /tmp/error.log;
    listen 443;
    ssl on;
    server_name router.local;
    ssl_certificate /tmp/server.crt;
    ssl_certificate_key /tmp/server.key;
    ssl_client_certificate /tmp/ca.crt;
    ssl_trusted_certificate /tmp/ca.crt;
    ssl_verify_client on;
    ssl_verify_depth 1;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;
    location /error_serve {
        root /sites/error;
        error_page 400 402 403 404 = /error_serve/5xx.html;
    }
    location / {
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_pass http://192.168.1.1/;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        error_page 400 500 502 503 504 = /error_serve/5xx.html;
        error_page 495 496 = /error_serve/cert_wrong.html;
    }
}
All the other error pages are working; only 495 and 496 return the standard pages.

The SSL handshake happens before any HTTP request is read, so no location is known at that point. In this case nginx just picks up the configuration from the first location block (which is location /error_serve in your configuration), and that block defines no error_page for 495 or 496.
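Following that logic, one way to make the directive visible during the handshake is to declare it at server scope, where every location (including the implicit handling used before a request is read) inherits it. A minimal sketch of that idea, keeping the paths from the question:

server {
    # listen, certificates, ssl_verify_client etc. as in the original config
    # Declared at server level so it applies before any location is matched:
    error_page 495 496 = /error_serve/cert_wrong.html;
    location /error_serve {
        root /sites/error;
    }
}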

Related

Tons of timeouts from Node.js Express API hosted on Nginx behind Cloudflare

I have a Node.js Express API (MySQL) hosted on Nginx behind Cloudflare (2 instances running). I'm getting a lot of 504 timeouts on Roblox and "upstream timed out" errors on Nginx. I have never seen a request I sent with Postman fail; I think it happens more under load. These instances are processing 11M requests a week. This is hosted on a 16-core, 64 GB RAM dedicated server with a 2-3 load average.
The Nginx error log spams these:
upstream timed out (110: Connection timed out) while reading response header from upstream
no live upstreams while connecting to upstream
upstream prematurely closed connection while reading response header from upstream
The "upstream timed out" errors are the main concern, as they make up the majority of the errors.
Generally, I don't do much processing in the API. I have fewer than a dozen endpoints that are mostly simple DB selects.
Can someone point me in the right direction to resolve this? Is it my Nginx config? Do I need more instances? Is it my design, Roblox, or Cloudflare? I read that Node.js can handle this load under one instance, so I tried adjusting worker connections in Nginx, which only caused more "no live upstreams" errors. I cannot wrap my head around what the bottleneck is.
Site Config
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
upstream nodejs {
    # Use IP Hash for session persistence
    ip_hash;
    keepalive 90;
    # List of Node.js application servers
    server localhost:9000;
    server localhost:9001;
}
# HTTP: www and non-www to https
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}
# HTTPS: non-www to www
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /example/example.com.cert.pem;
    ssl_certificate_key /example/example.com.key.pem;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}
# HTTPS: www
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate /example/example.com.cert.pem;
    ssl_certificate_key /example/example.com.key.pem;
    server_name www.example.com;
    location / {
        return 301 $scheme://www.example.example$request_uri;
    }
    location /api {
        proxy_pass https://nodejs;
        proxy_cache backcache;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
        proxy_redirect https://nodejs https://www.example.com;
    }
    location /api_staging {
        proxy_pass https://localhost:8000;
        proxy_cache backcache;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
        proxy_redirect https://localhost:8000 https://www.example.com;
    }
    location /api_development {
        proxy_pass https://localhost:7000;
        proxy_cache backcache;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
        proxy_redirect https://localhost:7000 https://www.example.com;
    }
}
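One detail worth flagging in this config (an observation, not a guaranteed fix): the map $http_upgrade $connection_upgrade block is defined but $connection_upgrade is never used; every location hardcodes Connection 'upgrade'. With keepalive 90 in the upstream block, nginx can only reuse upstream connections when it sends an empty Connection header, so hardcoding 'upgrade' effectively disables keepalive and forces a fresh upstream connection per request, which under load can produce exactly these "no live upstreams" bursts. A sketch of the /api location using the map, assuming the Node apps speak plain HTTP on ports 9000/9001:

location /api {
    proxy_pass http://nodejs;
    proxy_cache backcache;
    proxy_http_version 1.1;
    # Send "Connection: upgrade" only for WebSocket requests;
    # otherwise send an empty header so upstream keepalive can work.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_read_timeout 90;
}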
Nginx Config
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1000;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    client_max_body_size 100M;
}

#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen localhost:110;
#        protocol pop3;
#        proxy on;
#    }
#
#    server {
#        listen localhost:143;
#        protocol imap;
#        proxy on;
#    }
#}
Cloudflare settings
Proxying is on
SSL mode is Full (strict)
All Roblox IPs are allowed through the firewall

Nginx performance is too slow even though I applied every performance trick

I have an Nginx server with the following config (/etc/nginx/nginx.conf):
include /etc/nginx/conf.d/modules/*.conf;

user nobody;
worker_processes auto;
#worker_rlimit_nofile 40000;
worker_rlimit_nofile 65535;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 40000;
    use epoll;
    multi_accept on;
    epoll_events 512;
}

http {
    #client_header_timeout 3000;
    client_body_timeout 300;
    fastcgi_read_timeout 3000;
    #client_max_body_size 32m;
    #fastcgi_buffers 8 128k;
    #fastcgi_buffer_size 128k;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    access_log off;
    tcp_nodelay on;
    log_not_found off;
    sendfile on;
    tcp_nopush on;
    # keepalive_timeout 65;

    gzip on;
    gzip_static on;
    gzip_min_length 10240;
    gzip_comp_level 1;
    gzip_vary on;
    gzip_disable msie6;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types
        # text/html is always compressed by HttpGzipModule
        text/css
        text/javascript
        application/x-javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;

    #client_body_timeout 10;
    send_timeout 2;
    keepalive_timeout 60;
    # number of requests client can make over keep-alive -- for testing environment
    keepalive_requests 100000;

    include /etc/nginx/conf.d/*.conf;
}
I am using cPanel, and this is my site's config:
server {
    server_name alparslan.qsinav.com www.alparslan.qsinav.com;
    listen 80;
    set $CPANEL_APACHE_PROXY_IP 213.159.7.72;
    listen 443 ssl;
    ssl_certificate /var/cpanel/ssl/apache_tls/alparslan.qsinav.com/combined;
    ssl_certificate_key /var/cpanel/ssl/apache_tls/alparslan.qsinav.com/combined;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ALL:!ADH:+HIGH:+MEDIUM:-LOW:-EXP;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        fastcgi_read_timeout 180;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_connect_timeout 120;
        proxy_read_timeout 120;
        proxy_send_timeout 120;
    }
    location ~* \.(ico|css|js|gif|jpeg|jpg|png|woff|ttf|otf|svg|woff2|eot)$ {
        expires 1d;
        access_log off;
        add_header Pragma public;
        add_header Cache-Control "public, max-age=86400";
    }
    root /home/qsinav/public_html/alparslan/public;
    index index.php index.html;
    location = /FPM_50x.html {
        root /etc/nginx/ea-nginx/html;
    }
    include conf.d/server-includes/*.conf;
    include conf.d/users/qsinav/*.conf;
    include conf.d/users/qsinav/alparslan.qsinav.com/*.conf;
    location ~ \.php7?$ {
        include conf.d/includes-optional/cpanel-fastcgi.conf;
        fastcgi_pass unix:/opt/cpanel/ea-php74/root/usr/var/run/php-fpm/58ea52f18cb33ca4e5a37e3fd6c39780e15caa8c.sock;
        error_page 502 503 /FPM_50x.html;
    }
    include conf.d/includes-optional/cpanel-cgi-location.conf;
    include conf.d/includes-optional/cpanel-server-parsed-location.conf;
}
The problem I have is that when more than 80 users log in to my system, it becomes very slow, and then I see this error in the nginx log:
2020/11/07 14:27:11 [error] 1958#1958: *627 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 78.182.232.43, server: domain.qsinav.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/opt/cpanel/ea-php74/root/usr/var/run/php-fpm/58ea52f18cb33ca4e5a37e3fd6c39780e15caa8c.sock", host: "domain.qsinav.com"
Then 503 connection timeout errors start to appear for the clients.
My server's hardware is substantial (62 GB of RAM, 10 CPU cores). As far as I know, even a modest server should handle more than 10,000 users at the same time without any problem, yet my system cannot even handle 80 users. So where could the problem be?
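One place worth looking (an educated guess from the log line, not a confirmed diagnosis): the timeout is on a fastcgi:// upstream, i.e. PHP-FPM, so the queueing is likely happening in the PHP-FPM pool rather than in nginx itself. If the pool's pm.max_children is small, 80 concurrent logins queue behind a handful of PHP workers until nginx gives up. These are the standard PHP-FPM pool directives to check; the numbers below are purely illustrative:

; PHP-FPM pool settings (values are illustrative, tune to your RAM/CPU)
pm = dynamic
pm.max_children = 50       ; hard cap on concurrent PHP requests
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 500      ; recycle workers to contain memory leaks

If raising the cap does not help, the next suspects are slow queries holding workers open; PHP-FPM's slow log (request_slowlog_timeout plus slowlog) will show which script is blocking.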

Error "Request failed with status code 404"

I'm running a "universal" Nuxt project on an NGINX+MySQL+PHP Ubuntu 18.04 server. Some pages use Axios to get data from a database (JSON responses produced by PHP). The project works fine in dev and production mode. The server uses nginx as a reverse proxy (localhost:3000 -> localhost:80).
But after I installed HTTPS and SSL certificates (DigitalOcean manual: How To Secure Nginx with Let's Encrypt on Ubuntu 18.04), the server started to show an error in production mode:
ERROR Request failed with status code 404
    at createError (node_modules/axios/lib/core/createError.js:16:15)
    at settle (node_modules/axios/lib/core/settle.js:18:12)
    at IncomingMessage.handleStreamEnd (node_modules/axios/lib/adapters/http.js:201:11)
    at IncomingMessage.emit (events.js:194:15)
    at IncomingMessage.EventEmitter.emit (domain.js:441:20)
    at endReadableNT (_stream_readable.js:1125:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
I tried the example nginx configuration from the official Nuxt site, but the error keeps appearing.
My config file /etc/nginx/sites-available/web_site.com:
map $sent_http_content_type $expires {
    "text/html" epoch;
    "text/html; charset=utf-8" epoch;
    default off;
}
server {
    root /var/www/html;
    server_name web_site.com www.web_site.com;
    gzip on;
    gzip_types text/plain application/xml text/css application/javascript;
    gzip_min_length 1000;
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/web_site.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/web_site.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
    location /basemysql {
        auth_basic "Admin Login";
        auth_basic_user_file /etc/nginx/pma_pass;
    }
}
server {
    if ($host = www.web_site.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = web_site.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name web_site.com www.web_site.com;
    return 404; # managed by Certbot
}
The app is fully functional until you try to reload it; the error appears every time I reload any page that uses Axios.
I found the problem: the redirection from HTTP to HTTPS was causing the error.
I deleted this configuration and it works fine:
server {
    if ($host = www.web_site.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = web_site.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name web_site.com www.web_site.com;
    return 404; # managed by Certbot
}
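A plausible explanation for why deleting it helped (my reading of the config, not something confirmed in the post): during server-side rendering, Axios requests arriving over plain HTTP with a Host matching neither if condition fell through to return 404, which is exactly the status in the stack trace. A sketch of a middle ground that keeps the HTTP-to-HTTPS redirect without the blanket 404, assuming that fall-through was the culprit:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name web_site.com www.web_site.com;
    # Redirect everything instead of answering unmatched Hosts with 404:
    return 301 https://web_site.com$request_uri;
}

An alternative is to point server-side Axios straight at http://localhost:3000 via its baseURL, so SSR requests never pass through the public port-80 server at all.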

Nginx on CentOS server throws 404 Not Found

Nginx throws a 404 Not Found error on an HTML page. My index.html page is in the /var/www/html directory, and the nginx default configuration is given here:
server {
    listen 80 default_server;
    server_name localhost;
    root /var/www/html;
    index index.html index.htm;
    include /etc/nginx/default.d/*.conf;
    location / {
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
    location = /40x.html {
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
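On CentOS, a correct-looking config that still returns 404 for /var/www/html often points to SELinux or file permissions rather than nginx: with try_files $uri $uri/ =404, a file nginx is not allowed to stat is reported as 404, not 403. A sketch of checks worth running, assuming a stock CentOS install with SELinux enforcing:

# See whether SELinux is enforcing and how the files are labeled
getenforce
ls -Z /var/www/html

# Label the tree so nginx (confined as httpd_t) may read it
sudo chcon -R -t httpd_sys_content_t /var/www/html

# Make sure the files are world-readable and directories traversable
sudo chmod -R o+rX /var/www/html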

Nginx | 2 Domains (1x Node App, 1x Static HTML) on one server

I am having trouble running one Node app and one static page (just HTML) on two separate domains on the same server at the same time. No matter what I try, the static domain always gets redirected to the Node app (on port 3000).
Here are the "sites-available" files:
Node app:
server {
    listen [::]:80;
    listen 80;
    server_name www.domain1.com domain1.com;
    # and redirect to the https host (declared below)
    return 301 https://domain1.com$request_uri;
}
server {
    listen 443;
    server_name domain1.com www.domain1.com;
    ssl on;
    # Use certificate and key provided by Let's Encrypt:
    ssl_certificate /etc/letsencrypt/live/domain1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain1.com/privkey.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:3000/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
And the static one:
server {
    listen [::]:80;
    listen 80;
    #server_name www.domain2.com domain2.com;
    root /var/www/html/domain2;
    index index.html index.htm;
    return 301 https://domain2.com$request_uri;
}
server {
    listen [::]:443 ssl;
    listen 443 ssl;
    root /var/www/html/domain2;
    index index.html index.htm;
    ssl_certificate /etc/letsencrypt/live/domain2.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain2.com/privkey.pem;
}
The default config file is empty. It worked fine until I generated a Let's Encrypt certificate for domain2, put both domains in separate configs, and removed the default. Any help/hint would be greatly appreciated. Thank you in advance!
The problem is that you have no server_name directive in your static domain configuration. As a result, the request is always caught by your default server block, which appears to be your node app.
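A sketch of the fix following that diagnosis: give both static server blocks an explicit server_name (it is commented out in the port-80 block and absent from the port-443 one), using the names from the question:

server {
    listen [::]:80;
    listen 80;
    server_name domain2.com www.domain2.com;
    return 301 https://domain2.com$request_uri;
}

server {
    listen [::]:443 ssl;
    listen 443 ssl;
    server_name domain2.com www.domain2.com;
    root /var/www/html/domain2;
    index index.html index.htm;
    ssl_certificate /etc/letsencrypt/live/domain2.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain2.com/privkey.pem;
}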
See for details:
How nginx processes a request
Server names
Configuring HTTPS servers