Blazor WebAssembly nginx server returns HTML for *.css and *.js files

I'm having a hard time figuring out why resources such as CSS and JS files are returned with the same content as index.html:
As shown in the picture, each of those GET calls returns the content of index.html instead of the actual file.
Meanwhile, my nginx configuration looks like this:
server {
server_name <DOMAIN>;
listen 443 ssl http2;
listen [::]:443 ssl http2;
add_header X-Frame-Options "SAMEORIGIN";
#add_header X-Content-Type-Options "nosniff";
add_header X-Robots-Tag "none";
add_header X-Download-Options "noopen";
add_header X-Permitted-Cross-Domain-Policies "none";
add_header X-XSS-Protection "1;mode=block";
add_header Strict-Transport-Security "max-age=15552000; includeSubDomains";
add_header Referrer-Policy "no-referrer";
client_max_body_size 1G;
location /ん尺 {
root /var/www/<DOMAIN>;
try_files $uri $uri/ /index.html =404;
index index.html;
gzip_static on;
gzip_http_version 1.1;
gzip_vary on;
gzip_comp_level 6;
gzip_types *;
gzip_proxied no-cache no-store private expired auth;
gzip_min_length 1000;
default_type application/octet-stream;
}
include /etc/nginx/ssl.conf;
ssl_certificate /etc/letsencrypt/live/<DOMAIN>/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/<DOMAIN>/privkey.pem;
}
As you can see, the path is not / but /ん尺, because / is serving something else.
At the same time, the base in my index.html is <base href="/ん尺/">, so the resources initially point to the correct paths.
Is there something wrong with my setup?
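A likely explanation, offered only as an assumption since the question doesn't confirm where the published files live: with root, a request such as /ん尺/css/app.css is looked up on disk at /var/www/<DOMAIN>/ん尺/css/app.css, and any file not found there falls through to the /index.html entry of try_files, which would make every asset request return the index.html markup. If the published wwwroot contents actually sit directly under /var/www/<DOMAIN>, one way to line the paths up is an alias plus a sub-path fallback, sketched here:
location /ん尺/ {
    alias /var/www/<DOMAIN>/;              # map /ん尺/... onto the publish folder directly
    index index.html;
    try_files $uri $uri/ /ん尺/index.html; # SPA fallback for anything not found on disk
}
If the files are instead deployed under /var/www/<DOMAIN>/ん尺/, keeping root but pointing the try_files fallback at /ん尺/index.html should have the same effect.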

Related

Nginx performance is too slow even though I applied all the performance tricks

I have an Nginx server with the following config (/etc/nginx/nginx.conf):
include /etc/nginx/conf.d/modules/*.conf;
user nobody;
worker_processes auto;
#worker_rlimit_nofile 40000;
worker_rlimit_nofile 65535;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 40000;
use epoll;
multi_accept on;
epoll_events 512;
}
http {
#client_header_timeout 3000;
client_body_timeout 300;
fastcgi_read_timeout 3000;
#client_max_body_size 32m;
#fastcgi_buffers 8 128k;
#fastcgi_buffer_size 128k;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
access_log off;
tcp_nodelay on;
log_not_found off;
sendfile on;
tcp_nopush on;
# keepalive_timeout 65;
gzip on;
gzip_static on;
gzip_min_length 10240;
gzip_comp_level 1;
gzip_vary on;
gzip_disable msie6;
gzip_proxied expired no-cache no-store private auth;
gzip_types
# text/html is always compressed by HttpGzipModule
text/css
text/javascript
application/x-javascript
application/json
application/xml
application/rss+xml
application/atom+xml
font/truetype
font/opentype
application/vnd.ms-fontobject
image/svg+xml;
#client_body_timeout 10;
send_timeout 2;
keepalive_timeout 60;
# number of requests client can make over keep-alive -- for testing environment
keepalive_requests 100000;
include /etc/nginx/conf.d/*.conf;
}
I am using cPanel, and this is my site's config:
server {
server_name alparslan.qsinav.com www.alparslan.qsinav.com;
listen 80;
set $CPANEL_APACHE_PROXY_IP 213.159.7.72;
listen 443 ssl;
ssl_certificate /var/cpanel/ssl/apache_tls/alparslan.qsinav.com/combined;
ssl_certificate_key /var/cpanel/ssl/apache_tls/alparslan.qsinav.com/combined;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ALL:!ADH:+HIGH:+MEDIUM:-LOW:-EXP;
location / {
try_files $uri $uri/ /index.php?$query_string;
fastcgi_read_timeout 180;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_connect_timeout 120;
proxy_read_timeout 120;
proxy_send_timeout 120;
}
location ~* \.(ico|css|js|gif|jpeg|jpg|png|woff|ttf|otf|svg|woff2|eot)$ {
expires 1d;
access_log off;
add_header Pragma public;
add_header Cache-Control "public, max-age=86400";
}
root /home/qsinav/public_html/alparslan/public;
index index.php index.html;
location = /FPM_50x.html {
root /etc/nginx/ea-nginx/html;
}
include conf.d/server-includes/*.conf;
include conf.d/users/qsinav/*.conf;
include conf.d/users/qsinav/alparslan.qsinav.com/*.conf;
location ~ \.php7?$ {
include conf.d/includes-optional/cpanel-fastcgi.conf;
fastcgi_pass unix:/opt/cpanel/ea-php74/root/usr/var/run/php-fpm/58ea52f18cb33ca4e5a37e3fd6c39780e15caa8c.sock;
error_page 502 503 /FPM_50x.html;
}
include conf.d/includes-optional/cpanel-cgi-location.conf;
include conf.d/includes-optional/cpanel-server-parsed-location.conf;
}
The problem I have is that when more than 80 users log in to my system, it becomes very slow, and then I get this error in the nginx log:
2020/11/07 14:27:11 [error] 1958#1958: *627 upstream timed out (110:
Connection timed out) while reading response header from upstream,
client: 78.182.232.43, server: domain.qsinav.com, request: "GET /
HTTP/1.1", upstream:
"fastcgi://unix:/opt/cpanel/ea-php74/root/usr/var/run/php-fpm/58ea52f18cb33ca4e5a37e3fd6c39780e15caa8c.sock",
host: "domain.qsinav.com"
Then 503 / connection timed out errors start appearing for the clients.
My server's hardware is substantial (62 GB of RAM, 10 CPU cores).
As far as I know, even a poor server should handle more than 10,000 concurrent users without any problem, yet my system cannot even handle 80 users.
So where could the problem be?
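A hedged observation rather than a confirmed fix: the "upstream timed out ... while reading response header from upstream" error is raised while nginx waits on the PHP-FPM socket, so the bottleneck is almost certainly the PHP-FPM pool (its pm.max_children and related settings) rather than nginx itself, and that is worth checking first. Note also that the fastcgi_read_timeout 180; set inside location / never takes effect, because PHP requests are internally redirected to /index.php and handled by the location ~ \.php7?$ block; a longer timeout would have to live there, e.g.:
location ~ \.php7?$ {
    include conf.d/includes-optional/cpanel-fastcgi.conf;
    fastcgi_read_timeout 180;   # hypothetical value; this only hides the symptom, it does not add FPM capacity
    fastcgi_pass unix:/opt/cpanel/ea-php74/root/usr/var/run/php-fpm/58ea52f18cb33ca4e5a37e3fd6c39780e15caa8c.sock;
    error_page 502 503 /FPM_50x.html;
}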

nginx location alias stop redirect

I have the following nginx.conf:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 8080;
server_name localhost;
index index.html index.htm;
location /docs {
alias /usr/share/nginx/html;
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
#
# Custom headers and headers various browsers *should* be OK with but aren't
#
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
#
# Tell client that this pre-flight info is valid for 20 days
#
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
}
}
}
nginx is running in Docker. Traefik acts as a proxy and forwards requests on the /docs path to the nginx container (port 8080). The nginx container should simply return the static content.
My problem is that nginx always redirects me to http://api.example.com:8080/docs/ (which is not reachable, because nginx runs in Docker behind Traefik; that's why I need the path). I simply want to get the HTML content from the html directory under https://api.example.com/docs.
Additional output:
10.0.5.16 - example [11/Aug/2018:17:30:45 +0000] "GET /docs HTTP/1.1" 301 185 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.117 Safari/537.36"
How can I just serve the content under the /docs URL without these redirects, which are wrong?
To avoid the external redirect, you could use an internal rewrite from /docs to /docs/index.html.
For example:
location = /docs {
rewrite ^ /docs/index.html last;
}
location /docs {
...
}
This worked for me (a break after the rewrite, and a second location block for any other files):
location = /docs {
root /usr/share/nginx/html;
rewrite ^ /docs/index.html break;
}
location /docs {
root /usr/share/nginx/html;
}
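An alternative sketch, assuming nginx 1.11.8 or newer (which introduced absolute_redirect): keep nginx's automatic /docs -> /docs/ redirect but make it relative, so the container's own host name and port 8080 never leak into the Location header and the external https://api.example.com/docs URL keeps working through Traefik:
location /docs {
    alias /usr/share/nginx/html;
    absolute_redirect off;   # emit "Location: /docs/" instead of "http://api.example.com:8080/docs/"
}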

Nginx Microcache Exceptions for Login

I have an nginx + PHP-FPM server for my website and would like to use nginx microcaching. At first everything works fine: I get a "hit" with a curl command. The problem starts when I try to log in; I have tried everything but couldn't solve the login problem.
I set a "logged_in" cookie for 10 seconds, and in the cache config I set "no-cache" for that cookie. It is supposed to bypass caching while that cookie is present. I have also put a no-cache rule on my login URL.
Also, my website uses example.org/?i=login, so I don't know what's going on when I click login.
The main page is cacheable, but logging in returns the logged-out main page, and only after a refresh do I appear as a logged-in user. Logout logs me out, but after a refresh I am still shown as logged in. So I have no idea how to fix or bypass the login process.
Server Config:
fastcgi_cache_path /usr/share/nginx/cache/fcgi levels=1:2 keys_zone=microcache:32m max_size=1024m inactive=3h;
fastcgi_cache_key $scheme$host$request_uri$request_method;
fastcgi_cache_use_stale updating error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
add_header X-Cache $upstream_cache_status;
server {
listen ip:80;
server_name example.org;
return 301 $scheme://www.example.org$request_uri;
}
server {
server_name www.example.org;
listen ip:80;
root /home/example/public_html;
index index.html index.htm index.php;
access_log /var/log/virtualmin/example.org_access_log;
error_log /var/log/virtualmin/example.org_error_log;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
include /etc/nginx/example.d/cache.conf;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_cache microcache;
fastcgi_cache_key $scheme$host$request_uri$request_method;
fastcgi_cache_valid 200 301 302 30s;
#fastcgi_pass_header Set-Cookie;
#fastcgi_pass_header Cookie;
fastcgi_cache_bypass $no_cache;
fastcgi_no_cache $no_cache;
fastcgi_pass unix:/run/php/php5.6-fpm_example.sock;
fastcgi_index index.php;
include /etc/nginx/example.d/fastcgi.conf;
}
location ~* \.(jpg|jpeg|gif|css|png|js|woff|ttf|svg|ico|eot)$ {
access_log off;
log_not_found off;
expires max;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location ~ /\. {
access_log off;
log_not_found off;
deny all;
}
include /etc/nginx/example.d/redirect.conf;
include /etc/nginx/example.d/rewrite.conf;
}
Cache config (included from the server config):
#Cache everything by default
set $no_cache 0;
#Don't cache POST requests
if ($request_method = POST)
{
set $no_cache 1;
}
#Don't cache if the URL contains a query string
if ($query_string != "")
{
set $no_cache 1;
}
#Don't cache the following URLs
if ($request_uri ~* "/*login*|/*ajax*|/sistem/modul/login.php")
{
set $no_cache 1;
}
#Don't cache if there is a cookie called PHPSESSID
if ($http_cookie = "Logged_in")
{
set $no_cache 1;
}
EDIT: After some inspection I am pretty sure my problem is just with PHPSESSID. Every connection has a PHPSESSID, and nginx either caches it as well or ignores it completely, depending on the directives.
If PHPSESSID gets cached and the first browser to log in uses an admin account, everyone gets the admin's logged-in page from the cache :D
I need each connection's PHPSESSID to be protected.
It's as if nginx should first strip the PHPSESSID cookie and pass the request to FastCGI, and then, on the response coming back from the FastCGI server, re-attach the same PHPSESSID it stripped at the start.
Or alternatively: cache the PHP output (or everything) without the PHPSESSID cookie, and when serving content from the cache, have nginx attach the visitor's untouched, original PHPSESSID to it.
That way every visitor would keep a unique PHPSESSID while still getting cached content, which would resolve my login problems.
I can probably clear and set PHPSESSID, but I don't know how to save that unique/specific PHPSESSID and re-set it afterwards.
Or maybe it's not even possible; I came up with the theory but have no idea how to implement it :D
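One observation that may help, hedged because it isn't verified against this setup: fastcgi_ignore_headers ... Set-Cookie makes nginx cache responses even when they carry Set-Cookie, and that cached Set-Cookie is then replayed to every visitor, which would explain everyone receiving the admin's session. A minimal sketch for the cache config is to bypass the cache whenever a session or login cookie is present ($cookie_PHPSESSID is nginx's built-in variable for the PHPSESSID cookie; the cookie names are taken from the question and may need adjusting):
#Don't cache, or serve from cache, when the visitor carries a PHP session or login cookie
if ($cookie_PHPSESSID != "") {
    set $no_cache 1;
}
if ($http_cookie ~* "logged_in") {
    set $no_cache 1;
}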

how to set upgrade-insecure-requests in nginx

I have changed my site to HTTPS, but the code loads static files from a CDN. It doesn't work, and the Chrome console shows errors like this:
Mixed Content: The page at 'https://a.example.com/static/' was loaded over HTTPS, but requested an insecure stylesheet 'http://cdn.bootcss.com/bootstrap/3.3.5/css/bootstrap.min.css'. This request has been blocked; the content must be served over HTTPS.
I have added add_header Content-Security-Policy upgrade-insecure-requests; to the nginx configuration file like this:
server {
listen 80;
listen 443;
server_name a.example.com;
add_header Content-Security-Policy upgrade-insecure-requests;
if ($scheme != "https") {
return 301 https://$server_name$request_uri;
#rewrite ^ https://$server_name$request_uri? permanent;
}
ssl on;
ssl_certificate /etc/nginx/ssl/example.crt;
ssl_certificate_key /etc/nginx/ssl/example.key;
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
ssl_prefer_server_ciphers on;
gzip on;
gzip_proxied any;
gzip_types text/plain application/xml application/json;
client_max_body_size 8M;
access_log /var/log/nginx/example.log;
location / {
proxy_pass http://10.10.10.110:5000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
}
location ^~ /static/ {
proxy_pass http://10.10.10.110:8888;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
#proxy_set_header Content-Security-Policy upgrade-insecure-requests;
}
}
But it doesn't work yet! Can someone tell me how to fix this? Thanks :)
Be aware that upgrade-insecure-requests is not supported in all browsers, e.g. Safari and IE.
I recommend that you just replace the HTTP requests in your code. You can use // to load it relative to the protocol it is called from as per:
//cdn.bootcss.com/bootstrap/3.3.5/css/bootstrap.min.css
That means that if you are opening the web application from an HTTPS context, it will load it using the HTTPS protocol, otherwise it will use HTTP.

nginx add header to specific URL on Zend Framework

I have a Zend 1.x application and would like to add a header to a specific JSON request [not to all JSON requests]. For example, anything requesting /data.json should have the Access-Control-Allow-Origin header set.
I tried this config, but it is not working [I tried adding generic headers and that works, so it seems all the required modules are installed]. How would it be possible to add the header just for the /data.json request?
location /data.json {
add_header Access-Control-Allow-Origin *;
add_header Cache-Control "public";
try_files $uri $uri/ /index.php$is_args$args;
}
# this part actually serves the zend files
## Parse all .php file in the directory
location ~ \.(php|phtml)$ {
fastcgi_pass generic-fpm;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
So far, I am adding the header in the PHP script that generates the JSON response:
<?php
header('Access-Control-Allow-Origin: *');
?>
You forgot the ~ in your expression!
location ~ ^/data\.json {
add_header Access-Control-Allow-Origin *;
add_header Cache-Control "public";
try_files $uri $uri/ /index.php$is_args$args;
}
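A hedged caveat to go with this: when try_files falls back to /index.php, the request is re-processed by the PHP location, and add_header directives from the /data.json location no longer apply to that response. If the header still goes missing for responses generated by index.php, one sketch (the $cors_origin variable name is made up here) is to derive the value from $request_uri with a map at the http level and emit it from the PHP location, relying on the fact that add_header skips headers whose value is empty:
# http level
map $request_uri $cors_origin {
    ~^/data\.json   "*";
    default         "";
}
# inside the server block
location ~ \.(php|phtml)$ {
    add_header Access-Control-Allow-Origin $cors_origin;  # not emitted when $cors_origin is empty
    fastcgi_pass generic-fpm;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}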