I'm running nginx on Ubuntu 12.04 on an AWS machine, and I keep getting weird "caching" (?) issues on my production server. When I deploy new .css, .html, and .js code, some files update and others don't, and I get a weird mix of behavior between them (e.g. the app acts strangely). If I ask my users to reset their cache locally, everything works fine. I'd like to figure out a way to not have to ask users to do that!
I have tried changing the nginx configuration settings, but I keep getting "304 Not Modified" responses for my static files - even though I turned caching off and followed various Stack Overflow posts about how to turn caching off.
Does anyone have any thoughts on what might be the problem? My guesses so far: maybe it's something AWS-specific (though I tried turning sendfile off), or one of my other settings is overriding it?
I've tried:
How to prevent "304 Not Modified" in nginx?
How to clear the cache of nginx?
How to disable nginx cache
https://serverfault.com/questions/269420/disable-caching-when-serving-static-files-with-nginx-for-development
and nothing's worked.
I've tried sendfile off; and sendfile on;, setting "no cache", as well as setting a cache and having it expire in 1s (and running sudo service nginx restart between config file changes) - but still no luck. Every time, no matter what, I keep getting "304 Not Modified" headers, and my users keep seeing stale files.
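For reference, this is the kind of block I would have expected to kill the 304s completely - though if_modified_since off and etag off are guesses on my part, not something I've confirmed works here:
location ~* \.(css|js|html|htm)$ {
    # guesses: stop nginx from answering conditional requests with 304
    if_modified_since off;
    etag off;
    add_header Last-Modified "";
    add_header Cache-Control "no-store";
}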
My (full) current config:
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    add_header Cache-Control no-cache;
    sendfile off; # Virtualbox Issue
    expires 0s;

    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
And inside my /sites-enabled/ folder,
upstream app_server {
    server XX.XX.XX.XX:XXXX fail_timeout=0;
}

location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    if (!-f $request_filename) {
        proxy_pass http://app_server;
        break;
    }
}
# Virtualbox & Nginx Issues
sendfile off;
# Set the cache to expire - always (no caching)
location ~* \.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|xml|html|htm)$ {
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    expires 1s;
}
Any thoughts?
Thanks so much!!
Related
Hey there,
I'm new to dealing with NGINX servers and Linux. My HTML file is displayed, but my server does not load the CSS files.
The only thing I found was this line
include /etc/nginx/mime.types;
which I include in the http block.
After that I reloaded my config with sudo nginx -s reload. To be sure, I also executed sudo nginx -s stop and then sudo nginx.
This is my whole config:
http {
    include /etc/nginx/mime.types;

    server {
        location / {
            root /data/www;
        }

        location ~ \.(gif|jpg|png)$ {
            root /data/www/images;
        }
    }
}

events {}
My skeleton files are located in /data/www; in this directory there is also a CSS folder.
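For reference, here's roughly what I think it should end up as if the problem really is just the MIME type - the listen 80 line and the CSS folder being /data/www/css are my assumptions:
http {
    include /etc/nginx/mime.types;   # maps .css to text/css
    default_type application/octet-stream;

    server {
        listen 80;
        root /data/www;   # so /css/style.css resolves to /data/www/css/style.css

        location / {
        }
    }
}

events {}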
Thank you in advance.
First of all, you're going to need to tell NGINX to give your static files a TTL (time to live) via expires headers. Look for this in your NGINX configuration file; if it isn't there, create a new location block:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 1s;
}
After this, go ahead and purge your files from the server and force it to serve fresh ones.
Set sendfile off in nginx.conf
Set expires 1s in mysite.conf
Explicitly set Cache-Control header: add_header Cache-Control no-cache;
Of course, before doing any of the above, if the situation doesn't call for drastic measures, try manually deleting everything in the cache folder: /var/cache/nginx.
If that doesn't help, then proceed with everything listed here!
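Put together, the three settings above would look roughly like this (a sketch; the file names mirror the steps listed):
# nginx.conf, inside the http block
sendfile off;

# mysite.conf, inside the server block
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 1s;
    add_header Cache-Control no-cache;
}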
Once you've successfully purged the stale static files, add this to your NGINX server block for optimization:
gzip on;
gzip_comp_level 2;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain application/x-javascript text/xml text/css application/xml;
It's possible to set expire headers for files that don't change and are served regularly.
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}
I cannot figure out how to get nginx to serve my static files with React Router's HistoryLocation configuration. The setups I've tried either prevent me from refreshing or from accessing the URL as a top location (giving a 404 Cannot GET /...), or prevent me from submitting POST requests.
Here's my initial nginx setup (not including my mime.types file):
nginx.conf
# The maximum number of connections for Nginx is calculated by:
# max_clients = worker_processes * worker_connections
worker_processes auto;

# Process needs to run in foreground within container
daemon off;

events {
    worker_connections 1024;
}

http {
    # Hide nginx version information.
    server_tokens off;

    # Define the MIME types for files.
    include /etc/nginx/mime.types;

    # Update charset_types due to updated mime.types
    charset_types
        text/xml
        text/plain
        text/vnd.wap.wml
        application/x-javascript
        application/rss+xml
        text/css
        application/javascript
        application/json;

    # Speed up file transfers by using sendfile() to copy directly
    # between descriptors rather than using read()/write().
    sendfile on;

    # Define upstream servers
    upstream node-app {
        ip_hash;
        server 192.168.59.103:8000;
    }

    include sites-enabled/*;
}
default
server {
    listen 80;
    root /var/www/dist;
    index index.html index.htm;

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires 1d;
    }

    location @proxy {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_pass http://node-app;
        proxy_cache_bypass $http_upgrade;
    }

    location / {
        try_files $uri $uri/ @proxy;
    }
}
All the functionality I'm expecting of nginx as a reverse proxy is there, except it gives me the aforementioned 404 Cannot GET. After poking around for solutions, I tried to add
if (!-e $request_filename) {
    rewrite ^(.*)$ /index.html break;
}
in the location / block. This allows me to refresh and directly access the routes as top locations, but now I can't submit PUT/POST requests; instead I get back a 405 Method Not Allowed. I can see the requests are not being handled properly: the configuration I added rewrites all my requests to /index.html, and that's where my API is receiving them. I don't know how to both submit my PUT/POST requests to the right resource and still be able to refresh and access my routes using React Router's HistoryLocation.
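The only workable idea I've come up with so far is to stop rewriting everything and instead split the API from the SPA fallback - a sketch, assuming I could namespace my API under /api/ (which it currently isn't):
location /api/ {
    # PUT/POST and other API calls go straight to the backend, untouched
    proxy_pass http://node-app;
}

location / {
    # everything else falls back to index.html for React Router
    try_files $uri $uri/ /index.html;
}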
So I've been trying to get HLS working over HTTPS. This would seem like a simple task but I've hit a roadblock.
I can get HLS streaming over HTTP with no issues, as it's really straightforward. However, as soon as I change over to HTTPS, none of my clients can seem to play it. Most posts I've researched talk about encrypting the HLS content, but I don't really care about that; I just want to serve it.
What I've also noticed is that the .m3u8 is getting downloaded by the client, but my guess is that the chunks aren't, which is why the stream errors. Also, the Chrome debugging tools don't show any errors on the video object.
Here is my nginx configuration:
#
# HTTP server
#
server {
    listen 80;
    server_name localhost;
    root /var/www/html;
    index index.html index.htm;

    location /hls/ {
        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t ts;
        }
        add_header Cache-Control no-cache;
        try_files $uri $uri/ =404;
    }
}
#
# HTTPS server
#
server {
    listen 443;
    server_name localhost;
    root /var/www/html;
    index index.html index.htm;

    ssl on;
    ssl_certificate /etc/nginx/ssl/lab.company.com.crt;
    ssl_certificate_key /etc/nginx/ssl/lab.company.com.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
    ssl_prefer_server_ciphers on;

    location /hls/ {
        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t ts;
        }
        add_header Cache-Control no-cache;
        try_files $uri $uri/ =404;
    }
}
This was a configuration issue. You need to make sure you are not gzipping the HLS content, and that the security certificate is valid.
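A sketch of what that might look like in the HTTPS server's HLS location - gzip off is the key line, the rest mirrors the config above:
location /hls/ {
    gzip off;   # per the above: make sure the segments aren't compressed
    types {
        application/vnd.apple.mpegurl m3u8;
        video/mp2t ts;
    }
    add_header Cache-Control no-cache;
    try_files $uri $uri/ =404;
}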
I am a newbie with nginx and web server technologies.
I have a Django project and I am trying to use nginx + FastCGI on the web server.
In my project I have URLs which return HTML and URLs which return JSON data.
When I try to get JSON data, nginx always (no errors, no warnings) returns the HTML from the main page.
The content type of the response is "text/html", but it should be "application/json".
Here is my nginx configuration (this file is almost all default settings):
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
events {
    worker_connections 768;
}
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 8080;

        location / {
            fastcgi_pass 127.0.0.1:8881;
            include fastcgi_params;
        }

        location /static {
            alias /home/user/xxx/templates;
        }
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
I've tried setting up Apache with mod_python and got a successful result.
What am I doing wrong? How should I configure nginx properly to get the JSON data?
Please ask me if you need more information.
Thanks in advance.
It sounds like you need to add the application/json MIME type to your mime.types configuration. See these questions and answers.
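If it really is missing, the entry would look something like this inside the types map in /etc/nginx/mime.types (a sketch - check what your distribution already ships first):
types {
    application/json    json;
}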
Running a server with 140,000 page views a day (per analytics).
php-fpm processes use about 10-12M each.
The server has 10G of RAM; MySQL uses 1.2G-1.6G.
Configuration looks like this:
nginx
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log /var/log/nginx/access.log main;
    access_log off;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 10;
    client_max_body_size 20M;
    server_tokens off;

    include /etc/nginx/conf.d/*.conf;
}
php-fpm like this:
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
user = webadmin
group = webadmin
pm = dynamic
pm.max_children = 900
pm.start_servers = 900
pm.min_spare_servers = 200
pm.max_spare_servers = 900
pm.max_requests = 500
chdir = /
Typically the server runs just fine with 500 simultaneous users (again, real-time Google Analytics was used to get this estimate) but stalls at times when there are not that many users (75-100 simultaneous).
The configuration was done by my ISP, whom I trust, but I would still like to know whether it makes sense. (One thing I notice doing the math: pm.max_children = 900 at 10-12M each would be roughly 9-11G if they all spawned - essentially all of the RAM before MySQL takes its share.)
I am not saying this is the best setup; however, it works for us.
A few things I updated with our nginx setup are:
worker_connections: I believe a browser opens two connections per request, so you don't technically have 1024 available connections - you have 512 - so maybe change it to 2048.
I also changed the error log level, since you have to think about write times to keep the I/O low; I changed it from "warn" to "info".
If you want to keep the access log, maybe slim down the log entries it adds.
It might be worth looking at your master nginx.conf as well; you might have configs being overwritten by this file and set back to defaults.
Just two little things I did from a big list I went through; however, this article is great - link
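In config terms, the two tweaks described above would look roughly like this against the posted nginx config (a sketch):
events {
    worker_connections 2048;   # doubled from 1024, per the two-connections-per-client reasoning above
}

error_log /var/log/nginx/error.log info;   # changed from warn, as described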