Nginx performance is too slow even though I applied every performance trick - mysql

I have an Nginx server with the following config (/etc/nginx/nginx.conf):
include /etc/nginx/conf.d/modules/*.conf;

user nobody;
worker_processes auto;
#worker_rlimit_nofile 40000;
worker_rlimit_nofile 65535;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 40000;
    use epoll;
    multi_accept on;
    epoll_events 512;
}
http {
    #client_header_timeout 3000;
    client_body_timeout 300;
    fastcgi_read_timeout 3000;
    #client_max_body_size 32m;
    #fastcgi_buffers 8 128k;
    #fastcgi_buffer_size 128k;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    access_log off;
    tcp_nodelay on;
    log_not_found off;
    sendfile on;
    tcp_nopush on;
    # keepalive_timeout 65;
    gzip on;
    gzip_static on;
    gzip_min_length 10240;
    gzip_comp_level 1;
    gzip_vary on;
    gzip_disable msie6;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types
        # text/html is always compressed by HttpGzipModule
        text/css
        text/javascript
        application/x-javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;
    #client_body_timeout 10;
    send_timeout 2;
    keepalive_timeout 60;
    # number of requests client can make over keep-alive -- for testing environment
    keepalive_requests 100000;
    include /etc/nginx/conf.d/*.conf;
}
I am using cPanel, and this is the config for my site:
server {
    server_name alparslan.qsinav.com www.alparslan.qsinav.com;
    listen 80;
    set $CPANEL_APACHE_PROXY_IP 213.159.7.72;
    listen 443 ssl;
    ssl_certificate /var/cpanel/ssl/apache_tls/alparslan.qsinav.com/combined;
    ssl_certificate_key /var/cpanel/ssl/apache_tls/alparslan.qsinav.com/combined;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ALL:!ADH:+HIGH:+MEDIUM:-LOW:-EXP;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        fastcgi_read_timeout 180;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_connect_timeout 120;
        proxy_read_timeout 120;
        proxy_send_timeout 120;
    }

    location ~* \.(ico|css|js|gif|jpeg|jpg|png|woff|ttf|otf|svg|woff2|eot)$ {
        expires 1d;
        access_log off;
        add_header Pragma public;
        add_header Cache-Control "public, max-age=86400";
    }

    root /home/qsinav/public_html/alparslan/public;
    index index.php index.html;

    location = /FPM_50x.html {
        root /etc/nginx/ea-nginx/html;
    }

    include conf.d/server-includes/*.conf;
    include conf.d/users/qsinav/*.conf;
    include conf.d/users/qsinav/alparslan.qsinav.com/*.conf;

    location ~ \.php7?$ {
        include conf.d/includes-optional/cpanel-fastcgi.conf;
        fastcgi_pass unix:/opt/cpanel/ea-php74/root/usr/var/run/php-fpm/58ea52f18cb33ca4e5a37e3fd6c39780e15caa8c.sock;
        error_page 502 503 /FPM_50x.html;
    }

    include conf.d/includes-optional/cpanel-cgi-location.conf;
    include conf.d/includes-optional/cpanel-server-parsed-location.conf;
}
The problem I have is that when more than 80 users log in to my system, everything becomes very slow, and then I get this error in the Nginx log:
2020/11/07 14:27:11 [error] 1958#1958: *627 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 78.182.232.43, server: domain.qsinav.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/opt/cpanel/ea-php74/root/usr/var/run/php-fpm/58ea52f18cb33ca4e5a37e3fd6c39780e15caa8c.sock", host: "domain.qsinav.com"
Then the 503 / connection timed out errors start to appear for the clients.
My server's hardware is generous (62 GB of RAM, 10 CPU cores).
As far as I know, even the weakest server should handle more than 10,000 users at the same time without any problem, and my system cannot even handle 80 users.
So where could the problem be?
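
The error itself narrows this down: nginx is giving up while waiting on the PHP-FPM socket, which means PHP-FPM (or something behind it, such as slow MySQL queries holding workers) is the bottleneck, not nginx. With a small pool, roughly 80 concurrent logins can occupy every PHP worker, and the remaining requests queue on the socket until fastcgi_read_timeout fires. A minimal sketch of the pool settings worth inspecting, assuming a cPanel/EA4 pool file somewhere under /opt/cpanel/ea-php74/root/etc/php-fpm.d/ (the path and all numbers here are illustrative, not taken from the question):

    ; php-fpm pool sketch -- tune so that max_children x average PHP
    ; process size stays well under the 62 GB of RAM
    pm = dynamic
    pm.max_children = 80          ; hard cap on concurrent PHP requests
    pm.start_servers = 16
    pm.min_spare_servers = 8
    pm.max_spare_servers = 24
    pm.max_requests = 500         ; recycle workers to contain memory leaks
    pm.status_path = /fpm-status  ; expose listen queue / active processes

The status page confirms or rules out pool saturation: if the listen queue grows while all workers are active, raise pm.max_children; if workers sit idle but requests still stall, profile the application and MySQL instead.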

Related

Tons of timeouts from Node.JS Express API hosted on Nginx behind Cloudflare

I have a Node.JS Express API (MySQL) hosted on Nginx behind Cloudflare, with 2 instances running. I'm getting a lot of 504 timeouts on Roblox and upstream timed out errors on Nginx. I have never seen a request I sent with Postman fail; I think it happens more under load. These instances are processing 11M requests a week. This is hosted on a 16-core, 64 GB RAM dedicated server with a 2-3 load average.
Nginx error log spams these:
upstream timed out (110: Connection timed out) while reading response header from upstream
no live upstreams while connecting to upstream
upstream prematurely closed connection while reading response header from upstream
The upstream timed out errors are the main concern, as they make up the majority of the errors.
Generally, I don't do much processing in the API. I have fewer than a dozen endpoints, mostly simple DB selects.
Can someone point me in the right direction to resolve this? Is it my Nginx config, do I need more instances, is it my design, is it Roblox, is it Cloudflare? I read that Node.js can handle this (under one instance), so I tried adjusting worker connections in Nginx, which only caused more no live upstreams errors. I cannot wrap my head around what the bottleneck is.
Site Config
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream nodejs {
    # Use IP Hash for session persistence
    ip_hash;
    keepalive 90;
    # List of Node.js application servers
    server localhost:9000;
    server localhost:9001;
}
# HTTP: www and non-www to https
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

# HTTPS: non-www to www
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /example/example.com.cert.pem;
    ssl_certificate_key /example/example.com.key.pem;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}

# HTTPS: www
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate /example/example.com.cert.pem;
    ssl_certificate_key /example/example.com.key.pem;
    server_name www.example.com;

    location / {
        return 301 $scheme://www.example.example$request_uri;
    }

    location /api {
        proxy_pass https://nodejs;
        proxy_cache backcache;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
        proxy_redirect https://nodejs https://www.example.com;
    }

    location /api_staging {
        proxy_pass https://localhost:8000;
        proxy_cache backcache;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
        proxy_redirect https://localhost:8000 https://www.example.com;
    }

    location /api_development {
        proxy_pass https://localhost:7000;
        proxy_cache backcache;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
        proxy_redirect https://localhost:7000 https://www.example.com;
    }
}
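
One detail in this site config works against the keepalive 90 pool declared on the upstream: the /api location sends Connection: 'upgrade' on every request, so upstream connections are treated as WebSocket upgrades and never reused (and mapping the empty case to close also prevents reuse). Per the nginx docs, HTTP keepalive to an upstream needs proxy_http_version 1.1 plus a cleared Connection header. A sketch of the /api location under that assumption, if WebSockets are not actually used on this path (values carried over from the config above):

    location /api {
        proxy_pass https://nodejs;
        proxy_cache backcache;
        proxy_http_version 1.1;
        # Clear the Connection header so nginx can reuse the
        # "keepalive 90" pool instead of opening (and tearing down)
        # a fresh upstream connection for every request.
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_read_timeout 90;
    }

If the API does serve WebSockets, keep the Upgrade/Connection map pattern, but map the empty string to '' rather than close so ordinary requests can still use keepalive.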
Nginx Config
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1000;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    client_max_body_size 100M;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
Cloudflare Settings
Proxied is on
Full (strict) SSL
All Roblox IPs are allowed through the firewall
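
As for the no live upstreams errors: nginx marks an upstream server as down after max_fails failed attempts (default 1) for the duration of fail_timeout (default 10s), and logs no live upstreams when both servers are sidelined at once, which is exactly what a brief burst of timeouts under load produces. A sketch with more tolerant values (the numbers are illustrative, not a recommendation):

    upstream nodejs {
        ip_hash;
        keepalive 90;
        # The default max_fails=1 means a single timeout benches a
        # server for 10s; under load one hiccup can bench both at once.
        server localhost:9000 max_fails=5 fail_timeout=30s;
        server localhost:9001 max_fails=5 fail_timeout=30s;
    }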

Blazor WebAssembly nginx server returns HTML on *.css or *.js files

I'm having a hard time figuring out why resources such as CSS and JS files are returned the same as index.html.
As shown in the picture, each of those GET calls returns the content of index.html instead of the original content.
Meanwhile my nginx configuration looks like this:
server {
    server_name <DOMAIN>;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    add_header X-Frame-Options "SAMEORIGIN";
    #add_header X-Content-Type-Options "nosniff";
    add_header X-Robots-Tag "none";
    add_header X-Download-Options "noopen";
    add_header X-Permitted-Cross-Domain-Policies "none";
    add_header X-XSS-Protection "1;mode=block";
    add_header Strict-Transport-Security "max-age=15552000; includeSubDomains";
    add_header Referrer-Policy "no-referrer";
    client_max_body_size 1G;

    location /ん尺 {
        root /var/www/<DOMAIN>;
        try_files $uri $uri/ /index.html =404;
        index index.html;
        gzip_static on;
        gzip_http_version 1.1;
        gzip_vary on;
        gzip_comp_level 6;
        gzip_types *;
        gzip_proxied no-cache no-store private expired auth;
        gzip_min_length 1000;
        default_type application/octet-stream;
    }

    include /etc/nginx/ssl.conf;
    ssl_certificate /etc/letsencrypt/live/<DOMAIN>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<DOMAIN>/privkey.pem;
}
As you can see, the path is not / but /ん尺, because the / path is running something else.
At the same time, my index.html base is <base href="/ん尺/">, so the resources resolve correctly at first.
Is there something wrong with my setup?
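
A behaviour worth checking in the config above: in try_files $uri $uri/ /index.html =404;, the /index.html fallback is resolved against the server root, not the /ん尺 prefix, so any request that misses on disk (including a CSS or JS file published to an unexpected path) ends up at /index.html and gets whatever handles / instead of the Blazor app. A sketch keeping the fallback inside the sub-path (paths assume the published files sit under /var/www/<DOMAIN>/ん尺/):

    location /ん尺 {
        root /var/www/<DOMAIN>;
        index index.html;
        # Fall back to the app's own shell so client-side routes work;
        # a genuinely missing asset will now also return this page, so
        # verify the .css/.js files really exist under the ん尺 folder.
        try_files $uri $uri/ /ん尺/index.html;
    }

If the assets still come back as HTML after this change, the files are almost certainly not where root + URI points, which the access log (or an ls on the directory) will confirm.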

Redirect HTTP requests to HTTPS on an nginx server

I am running an app on a DigitalOcean server using Ubuntu 14.04 and nginx. My app runs via Gunicorn. I would like to redirect HTTP requests directly to HTTPS.
I tried
server {
    # Running port
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}
and it works in Safari, but it does not work in Chrome or Firefox. Any idea what I am doing wrong?
I have attached the entire nginx.conf file below:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    sendfile on;
    gzip on;
    gzip_http_version 1.1;
    gzip_comp_level 5;
    gzip_proxied any;
    gzip_min_length 256;
    gzip_vary on;

    # Configuration containing list of application servers
    upstream app_servers {
        server 127.0.0.1:8080;
    }

    # Configuration for Nginx
    server {
        # Running port
        listen 80;
        server_name example.com www.example.com;
        return 301 https://$host$request_uri;

        # Settings to serve static files
        location /static/ {
            # Example:
            # root /full/path/to/application/static/file/dir;
            root /var/www/example/app/;
            location ~* \.(jpg|woff|jpeg|png|gif|ico|css)$ {
                expires 30d;
            }
            location ~* \.(js)$ {
                expires 1d;
            }
            # we do not cache html, xml or json
            location ~* \.(?:manifest|appcache|html?|xml|json)$ {
                expires -1;
                # access_log logs/static.log; # I don't usually include a static log
            }
            location ~* \.(pdf)$ {
                expires 30d;
            }
        }

        # Serve a static file (ex. favico)
        # outside /static directory
        location = /favico.ico {
            root /app/favico.ico;
            gzip_static on;
        }
    }

    server {
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        # Proxy connections to the application servers
        # app_servers
        location / {
            proxy_connect_timeout 300s;
            proxy_read_timeout 300s;
            proxy_pass http://app_servers;
            proxy_redirect off;
            # proxy_redirect http:// https://;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
First of all, you should not serve anything over HTTP; everything should be served over HTTPS, even favico.ico:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    sendfile on;
    gzip on;
    gzip_http_version 1.1;
    gzip_comp_level 5;
    gzip_proxied any;
    gzip_min_length 256;
    gzip_vary on;

    # Configuration containing list of application servers
    upstream app_servers {
        server 127.0.0.1:8080;
    }

    # Configuration for Nginx
    server {
        # Running port
        listen 80;
        server_name example.com www.example.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        # Settings to serve static files
        location /static/ {
            # Example:
            # root /full/path/to/application/static/file/dir;
            root /var/www/example/app/;
            location ~* \.(jpg|woff|jpeg|png|gif|ico|css)$ {
                expires 30d;
            }
            location ~* \.(js)$ {
                expires 1d;
            }
            # we do not cache html, xml or json
            location ~* \.(?:manifest|appcache|html?|xml|json)$ {
                expires -1;
                # access_log logs/static.log; # I don't usually include a static log
            }
            location ~* \.(pdf)$ {
                expires 30d;
            }
        }

        # Serve a static file (ex. favico)
        # outside /static directory
        location = /favico.ico {
            root /app/favico.ico;
            gzip_static on;
        }

        # Proxy connections to the application servers
        # app_servers
        location / {
            proxy_connect_timeout 300s;
            proxy_read_timeout 300s;
            proxy_pass http://app_servers;
            proxy_redirect off;
            # proxy_redirect http:// https://;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Next, when you test in Chrome or any other browser, make sure to open a private or incognito window.
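
The private-window advice matters because a 301 is a permanent redirect: browsers cache it indefinitely, so a bad redirect from an earlier configuration keeps replaying in Chrome or Firefox even after the server is fixed, while a browser that never saw it (Safari here) behaves correctly. A common trick while iterating is a temporary redirect, switched to 301 once everything is verified; a sketch:

    server {
        listen 80;
        server_name example.com www.example.com;
        # 302 is not cached permanently -- safe while testing;
        # change back to 301 once the HTTPS setup is confirmed.
        return 302 https://$host$request_uri;
    }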

How to serve the index page from Django instead of the nginx static index page

I am trying to deploy my web app using Nginx, Gunicorn, and Django. The problem is that when I open the root URL (e.g. www.xyz.com) in a browser, it shows the default welcome page of Nginx, but I want to serve my index page through Django using proxy_pass.
When I open www.xyz.com// it works fine, as the URL matches the location block with pattern "/". Please suggest how I can make Nginx route www.xyz.com to my Gunicorn server.
Find my nginx.conf below:
user ec2-user;
worker_processes auto;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;

    upstream agencyhunt_server {
        server unix:/home/ec2-user/xyz/xyz.sock fail_timeout=10s;
    }

    server {
        listen 80;
        server_name www.taskuse.com;
        client_max_body_size 4G;
        access_log /home/ec2-user/agencyhunt/logs/nginx-access.log;
        error_log /home/ec2-user/agencyhunt/logs/nginx-error.log warn;
        location = /favicon.ico { access_log off; log_not_found off; }
        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://unix:/home/ec2-user/xyz/xyz.sock;
        }
        error_page 404 /404.html;
        location = /40x.html {
        }
    }
}
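
A plausible culprit for the welcome page: stock nginx ships a catch-all default server (on this kind of setup usually in /etc/nginx/nginx.conf itself or /etc/nginx/conf.d/default.conf) whose root is the welcome page, and include /etc/nginx/conf.d/*.conf; pulls it in. Any request whose Host header does not match www.taskuse.com exactly (the bare domain or the raw IP, for example) lands there instead of the Gunicorn proxy. A sketch of making this block the catch-all instead, assuming the paths above (remove or rename the shipped default server as well):

    server {
        # default_server makes this block answer any request whose Host
        # matches no other server_name, instead of the welcome page.
        listen 80 default_server;
        server_name www.taskuse.com taskuse.com;
        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://unix:/home/ec2-user/xyz/xyz.sock;
        }
    }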

Nginx and Chrome: after 5 requests for static content (50 MB), Chrome says Pending

I have a server with 8 cores and 32 GB of memory. I use Nginx, and this is my conf:
user www-data;
worker_processes 16;
pid /var/run/nginx.pid;
worker_rlimit_nofile 100000;

events {
    worker_connections 4000;
    use epoll;
    multi_accept on;
}

http {
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    sendfile on;
    client_max_body_size 2048m;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15 15;
    # keepalive_requests 100000;
    reset_timedout_connection on;
    client_header_timeout 1m;
    client_body_timeout 1m;
    send_timeout 1m;
    types_hash_max_size 2048;
    server_tokens off;
    gzip on;
    gzip_disable "msie6";
    gzip_buffers 256 4k;
    gzip_comp_level 5;
    # gzip_http_version 1.0;
    gzip_min_length 1280;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss application/json text/javascript image/x-icon image/bmp;
    gzip_vary on;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
On my website, after I watch 5 movies, I have to wait or close and restart Chrome to view another 5 movies...
In the Chrome Network tab, I can see the URL, but with status (pending) and nothing else...
In Chrome (bottom left) I can see: waiting for available socket.
Does anyone have a solution?
Is my nginx.conf good?
Seriously, thanks in advance.
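
For context on that message: waiting for available socket is a client-side limit, not an nginx one. Chrome opens at most six concurrent HTTP/1.x connections per host, and a streaming 50 MB video holds its socket for the whole playback, so the sixth-plus request queues until one closes; no nginx.conf setting raises that cap. Two server-side ways around it: serve the media from an extra hostname, or enable HTTP/2, which multiplexes all streams over a single connection. A sketch of the latter (certificate paths are placeholders; browsers only speak HTTP/2 over TLS):

    server {
        # http2 lets Chrome multiplex many requests over one connection,
        # sidestepping the six-sockets-per-host limit.
        listen 443 ssl http2;
        server_name example.com;
        ssl_certificate     /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/example.com.key;
    }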