I have a working nginx configuration; however, when I add an additional server {} block, one of the other server blocks breaks.
The additional server block I am adding is below:
server {
    listen 172.30.170.152:80;
    server_name example.com;
    port_in_redirect off;

    location / {
        proxy_pass http://127.0.0.1:80/;
        include /etc/nginx/conf.d/proxy.conf;
    }

    location ~* \.(jpg|jpeg|gif|png|ico|tgz|gz|rar|bz2|exe|ppt|txt|tar|mid|midi|wav|bmp|rtf|avi|html|mov|zip)$ {
        root /var/www/example.com/html;
        expires 90d;
    }

    location ~* ^.+\.(css|js)$ {
        root /var/www/example.com/html;
        expires 90d;
    }

    location ~ /\.git {
        rewrite .* / permanent;
    }
}
There are several other server blocks in this configuration that are identical apart from server_name, and none of them cause this issue.
The only thing special about the server {} block that breaks is the large number of server_names within it. I haven't counted them (there are too many), but a rough figure is about 170.
Other things that might help:
There are 20 server {} blocks before adding this additional one
nginx version: nginx/1.0.15
CentOS release 6.4 (Final)
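For reference, a very large number of server_name entries is the kind of setup where nginx's server-name hash tables may need enlarging at the http {} level. Whether that is what breaks here is only a guess, and the values below are purely illustrative:

http {
    # Illustrative values only - raise them until "nginx -t" stops
    # complaining about the server_names hash.
    server_names_hash_max_size 4096;
    server_names_hash_bucket_size 128;

    # ... existing server {} blocks ...
}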
I'm trying to host a site (called site1) nested within an existing domain (www.gateway.com).
e.g. Instead of www.site1.com/profile, it would be www.gateway.com/site1/profile.
I have an NGINX reverse proxy that detects the /site1/ path and proxies it to some upstream machines:
location ~ /site1/(.*)$ {
    proxy_pass http://upstreams/$1$is_args$args;
    proxy_set_header Host $host;
}
The proxy itself is working fine - it redirects all the paths correctly. However, the site's assets (e.g. JS, CSS, etc.) do not preserve the base path (www.gateway.com/site1).
e.g. It is trying to load www.gateway.com/normalize.css, when the actual asset lives at www.gateway.com/site1/normalize.css.
For reference, the HTML for site1 is sourcing assets like so:
<link href="/normalize.css" rel="stylesheet" />
I've also tried removing the leading / in the href, but this results in the asset's path including the full route (less the last fragment) - also not what is desired.
Note that site1 works fine when hosted at the root of a domain (e.g. www.gateway.com/profile).
Any insights would be helpful. Thanks!
You might already have a block that checks for static files that is messing up your asset delivery. To me what you are doing seems fine, but other nginx asset-handling code, possibly on the upstream or on site1 itself, is messing it up. Add a block that checks for static files; if you are only doing this for one or two sites, this is reasonable.
The code below should work until you can give us more info on the other asset-handling code.
location ~ /site1/(.*)$ {
    location ~* \.(?:js|css|jpg|jpeg|gif|png|ico|cur|svg)$ {
        alias /location/of/site1;
        expires 1M;
        access_log off;
        sendfile on;
        sendfile_max_chunk 1m;
        add_header Cache-Control public;
    }

    # Anything that isn't a static asset falls through to the upstream.
    try_files $uri @nonStatic;
}

location @nonStatic {
    proxy_pass http://upstreams/$1$is_args$args;
    proxy_set_header Host $host;
}
I'm trying to use nginx as a reverse proxy for MySQL. For some reason this doesn't work (mysql-1.example.com points to a VM with MySQL).
upstream db {
    server mysql-1.example.com:3306;
}

server {
    listen 3306;
    server_name mysql.example.com;

    location / {
        proxy_pass http://db;
    }
}
Is there a correct way to do this? I tried connecting via the mysql client, but it doesn't work.
Make sure your config is not held within the http {} section of nginx.conf; this config should be outside of http {}.
stream {
    upstream db {
        server mysql-1.example.com:3306;
    }

    server {
        listen 3306;
        proxy_pass db;
    }
}
You're trying to accomplish a TCP proxy with an http proxy, which is wrong.
Nginx can do the TCP load balancing/proxy stuff but the syntax is different.
Look at https://www.nginx.com/resources/admin-guide/tcp-load-balancing/ for more info.
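A minimal sketch of that syntax, assuming two hypothetical MySQL backends so the load-balancing part is visible:

# Top level of nginx.conf, alongside (not inside) http {}.
stream {
    upstream mysql_backends {
        # Hypothetical backend hosts - replace with real ones.
        server mysql-1.example.com:3306;
        server mysql-2.example.com:3306;
    }

    server {
        listen 3306;
        proxy_pass mysql_backends;
    }
}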
It should be possible as of nginx 1.9 using TCP reverse proxies.
You need to compile nginx with the --with-stream parameter.
Then, you can add a stream block to your config like #samottenhoff said in his answer.
For more details, see https://www.nginx.com/resources/admin-guide/tcp-load-balancing/ and http://nginx.org/en/docs/stream/ngx_stream_core_module.html.
Nginx Plus (paid) has a proper option to do that. Another way is to let the Docker container access the host database directly.
Using Nginx, I'm trying to configure my server to accept all domains that point to its IP and show them a specific website, but when www.example.com (the main website) is accessed, show different content.
Here's what I did so far:
server {
    # Redirect www to non-www
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;
    # rest of the configuration
}

server {
    # Catch all
    listen 80 default_server;
    # I also tried
    # server_name _;
    # without any luck.
    # Rest of the configuration
}
The problem with this configuration is that every request made to this server that isn't for www.example.com or example.com is handled by the example.com server configuration, not the catch-all.
I'd like to catch only www.example.com/example.com in the first two configurations, and all the others in the last configuration.
I suggest putting your default server at the top of the file :)
I think nginx wants default servers to be at the top of a file.
I have a lot of files on my server, but there is one with a default server as the first server declaration, and that works.
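For illustration, a minimal sketch of the kind of catch-all block being discussed, using the explicit default_server flag on listen (the return 444 here is just one common choice for unmatched hosts, not the only option):

server {
    # Catch-all for any Host header that no other server_name matches.
    listen 80 default_server;
    server_name _;
    return 444;   # or point root/index at the fallback site instead
}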
I've googled a lot and found several workarounds, but they all require you to define every single directory.
On Apache: example.com/hi -> example.com/hi/
On nginx: example.com/hi -> Firefox can't establish a connection to the server at example.com:8888
where 8888 is what Apache is listening on (nginx's :80 -> localhost:8888)
Any ideas how to fix this and have it just forward normally like folder?
I had a similar problem with varnish and nginx (varnish on port 80 proxying to nginx listening on 8080) and needed to add "port_in_redirect off;". server_name_in_redirect needed to stay on so nginx knew which host it was handling.
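A minimal sketch of where those two directives sat in that setup (hostnames and ports here are placeholders):

server {
    # nginx behind varnish: varnish listens on :80, nginx on :8080
    listen 8080;
    server_name example.com;

    port_in_redirect off;         # don't leak :8080 into redirects
    server_name_in_redirect on;   # use the server_name, not the request Host, in redirects

    root /var/www/example.com;
}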
The following should do the trick, but it needs more thought/work, because only a single location block will get used at a time:
location ~ ^(.*[^/])$ {
    if (-d $document_root/$1) {
        rewrite ^(.*)$ $1/ permanent;
    }
}
(not tested)
You can set "server_name_in_redirect off;" in your server section:
server {
    listen 80 default;
    server_name localhost;
    server_name_in_redirect off;
    ...
    ...
}
That will do the trick ;-)
HTH.
This is the magic that works best for me:
try_files $uri $uri/ @redirect;

location @redirect {
    if ($uri !~ '/$') {
        return 301 $uri/$is_args$args;
    }
}
The 'if' statement here is safe per: http://wiki.nginx.org/IfIsEvil
I am running Django, FastCGI, and Nginx. I am creating an API of sorts where someone can send some data via XML, which I will process, and then return some status codes for each node that was sent over.
The problem is that Nginx will throw a 504 Gateway Time-out if I take too long to process the XML -- I think longer than 60 seconds.
So I would like to set up Nginx so that any requests matching the location /api will not time out for 120 seconds. What setting will accomplish that?
What I have so far is:
# Handles all api calls
location ^~ /api/ {
    proxy_read_timeout 120;
    proxy_connect_timeout 120;
    fastcgi_pass 127.0.0.1:8080;
}
Edit: What I have is not working :)
Proxy timeouts are, well, for proxies, not for FastCGI...
The directives that affect FastCGI timeouts are client_header_timeout, client_body_timeout and send_timeout.
Edit: Considering what's found on the nginx wiki, the send_timeout directive is responsible for setting the general timeout of the response (which was a bit misleading). For FastCGI there's fastcgi_read_timeout, which affects the FastCGI process response timeout.
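Applied to the /api location from the question, that would look roughly like this (the 120s value simply mirrors the timeout asked for):

location ^~ /api/ {
    fastcgi_pass 127.0.0.1:8080;
    fastcgi_read_timeout 120s;   # wait up to 120s for the FastCGI response
    # plus whatever fastcgi_param/include lines the rest of the config uses
}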
For those using nginx with unicorn and rails, most likely the timeout is in your unicorn.rb file.
Put a large timeout in unicorn.rb:
timeout 500
If you're still facing issues, try setting fail_timeout=0 in your upstream in nginx and see if that fixes it. This is for debugging purposes and might be dangerous in a production environment.
upstream foo_server {
    server 127.0.0.1:3000 fail_timeout=0;
}
In the http section of nginx (/etc/nginx/nginx.conf), add or modify:
keepalive_timeout 300s;
In the server section of nginx (/etc/nginx/sites-available/your-config-file.com), add these lines:
client_max_body_size 50M;
fastcgi_buffers 8 1600k;
fastcgi_buffer_size 3200k;
fastcgi_connect_timeout 300s;
fastcgi_send_timeout 300s;
fastcgi_read_timeout 300s;
In the PHP-FPM pool config, for the case of 127.0.0.1:9000 (/etc/php/7.X/fpm/pool.d/www.conf), modify:
request_terminate_timeout = 300
I hope this helps.
If you use unicorn:
Look at top on your server. Unicorn is likely using 100% of the CPU right now.
There are several reasons for this problem.
You should check your HTTP requests; some of them can be very heavy.
Check unicorn's version. Maybe you've updated it recently and something broke.
In the proxy server block, set it like this:
location / {
    proxy_pass http://ip:80;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
}
In the PHP server block, set it like this:
server {
    client_body_timeout 120;

    location = /index.php {
        # include fastcgi.conf;                        # example
        # fastcgi_pass unix:/run/php/php7.3-fpm.sock;  # example version
        fastcgi_read_timeout 120s;
    }
}