I am running Django, FastCGI, and Nginx. I am creating an API of sorts where someone can send some data via XML, which I will process, and then return status codes for each node that was sent over.
The problem is that Nginx will throw a 504 Gateway Time-out if I take too long to process the XML -- I think longer than 60 seconds.
So I would like to set up Nginx so that any request matching the location /api will not time out for 120 seconds. What setting will accomplish that?
What I have so far is:
# Handles all api calls
location ^~ /api/ {
    proxy_read_timeout 120;
    proxy_connect_timeout 120;
    fastcgi_pass 127.0.0.1:8080;
}
Edit: What I have is not working :)
Proxy timeouts are, well, for proxies, not for FastCGI...
The directives that affect FastCGI timeouts are client_header_timeout, client_body_timeout and send_timeout.
Edit: According to the nginx wiki, the send_timeout directive is responsible for setting the general timeout of the response (which was a bit misleading). For FastCGI there's fastcgi_read_timeout, which affects the timeout for the FastCGI process's response.
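For the /api location from the question, a minimal sketch might look like this (the backend address 127.0.0.1:8080 and the 120-second target are taken from the question; adjust as needed):

# Handles all api calls
location ^~ /api/ {
    fastcgi_pass 127.0.0.1:8080;
    # Wait up to 120 seconds for the FastCGI backend to produce a response
    fastcgi_read_timeout 120s;
}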
For those using nginx with Unicorn and Rails, most likely the timeout is in your unicorn.rb file.
Put a large timeout in unicorn.rb:
timeout 500
If you're still facing issues, try setting fail_timeout=0 in your upstream in nginx and see if that fixes your issue. This is for debugging purposes and might be dangerous in a production environment.
upstream foo_server {
    server 127.0.0.1:3000 fail_timeout=0;
}
In the http section of nginx (/etc/nginx/nginx.conf), add or modify:
keepalive_timeout 300s;
In the server section of nginx (/etc/nginx/sites-available/your-config-file.com), add these lines:
client_max_body_size 50M;
fastcgi_buffers 8 1600k;
fastcgi_buffer_size 3200k;
fastcgi_connect_timeout 300s;
fastcgi_send_timeout 300s;
fastcgi_read_timeout 300s;
In the PHP-FPM pool file, in the case of 127.0.0.1:9000 (/etc/php/7.X/fpm/pool.d/www.conf), modify:
request_terminate_timeout = 300
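Putting it together, a rough sketch of where these settings live (the sites-available filename, the PHP location block, and the FPM address are placeholders based on the values above):

# /etc/nginx/nginx.conf
http {
    keepalive_timeout 300s;
    # ... rest of the http block ...
}

# /etc/nginx/sites-available/your-config-file.com
server {
    client_max_body_size 50M;
    fastcgi_buffers 8 1600k;
    fastcgi_buffer_size 3200k;
    fastcgi_connect_timeout 300s;
    fastcgi_send_timeout 300s;
    fastcgi_read_timeout 300s;

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_pass 127.0.0.1:9000;
    }
}

# /etc/php/7.X/fpm/pool.d/www.conf
request_terminate_timeout = 300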
I hope this helps you.
If you use Unicorn:
Look at top on your server. Unicorn is likely using 100% of the CPU right now.
There are several possible reasons for this problem.
You should check your HTTP requests; some of them can be very heavy.
Check Unicorn's version. Maybe you updated it recently and something broke.
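A quick, hedged way to watch just the Unicorn workers in top (assuming pgrep is available and the processes match the name "unicorn"):

# Show only the Unicorn processes and their CPU usage
top -p $(pgrep -d',' -f unicorn)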
In the proxy server, set it like this:
location / {
    proxy_pass http://ip:80;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
}
In the PHP server, set it like this:
server {
    client_body_timeout 120;

    location = /index.php {
        # include fastcgi.conf;  # example
        # fastcgi_pass unix:/run/php/php7.3-fpm.sock;  # example version
        fastcgi_read_timeout 120s;
    }
}
Ah, I faced this issue today and was able to solve it from multiple places.
For anyone facing a similar issue, I thought I'd try to help you out.
The 403 error most probably occurs because your server is not receiving the request directly from the client, i.e. either a load balancer or a proxy passes the request to nginx.
To solve this, use
set_real_ip_from 34.117.182.58/0;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
under the http block (note: http, not server), i.e.:
http {
    set_real_ip_from 34.117.182.58/0;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
}
This should solve the 403 error, but most likely you will receive a 404 error.
This is due to how nginx works. To solve this, add proxy_pass http://backend_server; to the location,
e.g.:
location /admin {
    allow 1.2.3.4;
    deny all;
    proxy_pass http://backend_server;
}
This should solve your issue :)
Hope I could be of help
I have a simple setup of 3 servers (in containers): 2 "app" servers (whoami services, so I can tell from the response which server answered) and an nginx server.
I've launched nginx with a simple load-balancing configuration:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream myapp1 {
        server w1:8000 weight=1;
        server w2:8000 weight=1;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1/;
        }
    }
}
The problem is that it doesn't work in Chrome: it always loads only the first server. I've tried turning off the cache in the Dev console and reloading via CTRL+F5, but nothing helped.
If I curl the nginx server, I get responses in a round-robin manner (as expected).
Here is my containers setup:
docker network create testnw
docker run -dit --name w1 --network testnw jwilder/whoami # app1
docker run -dit --name w2 --network testnw jwilder/whoami # app2
docker run -dit --name ng --network testnw -p 8989:80 -v ${PWD}/my.conf:/etc/nginx/nginx.conf nginx # LB server
curl localhost:8989 # will get response from w1
curl localhost:8989 # will get response from w2
curl localhost:8989 # will get response from w1
...
Edit 3: Found out an interesting issue.
In Chrome, every time I access my website it makes two calls, no matter what: one to / of my website and one to /favicon.ico of my website.
I don't have a /favicon.ico.
What I think is happening:
When Nginx gets a request for / of my website, it loads the first upstream server.
When Chrome loads / from my website, it also requests /favicon.ico, which results in a new call to Nginx, so the .ico file is loaded from the next upstream server.
This happens so that servers 1, 2, 3 are loaded in the order 1 (ico file from 2), 3 (ico file from 1), 2 (ico file from 3), and the cycle repeats.
Once I stopped the loading of /favicon.ico in Nginx, my three upstream servers 1, 2, 3 load in round-robin order 1, 2, 3.
I put this in the server block that uses the upstream, to stop favicon.ico requests from being proxied:
location = /favicon.ico {
    log_not_found off;
}
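An alternative sketch, if you would rather have nginx answer favicon requests itself so the extra browser request never reaches an upstream at all:

location = /favicon.ico {
    access_log off;
    log_not_found off;
    # Reply with an empty 204 instead of proxying or hitting the filesystem
    return 204;
}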
Hope anyone having this problem finds this useful.
Edit 2: Figured out the issue: the load balancing works fine with static files and static servers inside the Nginx conf file.
But my applications are served by Node, so I had to start Nginx after starting all the Node servers.
The issue reappears when I restart the application server while Nginx is running.
No issues for now; I will update soon.
Edit 1: This is not working for me anymore. It worked yesterday, but today, while continuing to work on the same configuration, the issue reappeared.
Had this same issue with my setup.
Here is what worked for me after a lot of proxy setup, VirtualBox setup, and network editing.
Add an extra, empty server block in the http block:
server {
}
and reload the Nginx service.
It worked for me: after reloading, both Chrome and Firefox load the servers in the given order. I then deleted the server block and it is still working.
I don't know why the issue arose in the first place.
Hope this helps to solve your issue.
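For completeness, reloading nginx after editing the config can be done with either of these, depending on how it was installed:

sudo nginx -s reload
# or, on systemd-based systems:
sudo systemctl reload nginx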
I have NGINX, itself running as a container, set up as a reverse proxy for a virtual network of Docker containers. One of these containers serves an Angular 4 based SPA with client-side routing in HTML5 mode.
The application is mapped to location / on NGINX, so that http://server/ brings you to the SPA home screen.
server {
    listen 80;
    ...
    location / {
        proxy_pass http://spa-server/;
    }

    location /other/ {
        proxy_pass http://other/;
    }
    ...
}
The Angular router changes the URL to http://server/home or other routes when navigating within the SPA.
However, when I try to access these URLs directly, a 404 is returned. This error originates from the spa-server, because it obviously does not have any content for these routes.
The examples I found for configuring NGINX to support this scenario always assume that the SPA's static content is served directly from NGINX and thus try_files is a viable option.
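For reference, the try_files pattern those examples use looks roughly like this, with the static build served straight from NGINX (the root path here is a made-up placeholder), which is not the setup in this question:

location / {
    root /var/www/spa/dist;
    # Fall back to index.html so client-side routes resolve
    try_files $uri $uri/ /index.html;
}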
How is it possible to forward any unknown URLs to the SPA so that it can handle them itself?
The solution that works for me is to add the directives proxy_intercept_errors and error_page to the location / in NGINX:
server {
    listen 80;
    ...
    location / {
        proxy_pass http://spa-server/;
        proxy_intercept_errors on;
        error_page 404 = /index.html;
    }

    location /other/ {
        proxy_pass http://other/;
    }
    ...
}
Now, NGINX will return /index.html, i.e. the SPA from the spa-server, whenever an unknown URL is requested. Still, the URL is available to Angular, and the router will immediately resolve it within the SPA.
Of course, now the SPA is responsible for handling "real" 404s. Fortunately, this is not a problem and a good practice within the SPA anyway.
UPDATE: Thanks to #dan
I'm trying to use a reverse proxy for mysql. For some reason this doesn't work (where mysql-1.example.com points to a VM with MySQL).
upstream db {
    server mysql-1.example.com:3306;
}

server {
    listen 3306;
    server_name mysql.example.com;

    location / {
        proxy_pass http://db;
    }
}
Is there a correct way to do this? I tried connecting via the mysql client, but it doesn't work.
Make sure your config is not held within the http { } section of nginx.conf. This config should be outside of the http {}.
stream {
    upstream db {
        server mysql-1.example.com:3306;
    }

    server {
        listen 3306;
        proxy_pass db;
    }
}
You're trying to accomplish a TCP proxy with an http proxy, which is wrong.
Nginx can do the TCP load balancing/proxy stuff but the syntax is different.
Look at https://www.nginx.com/resources/admin-guide/tcp-load-balancing/ for more info.
It should be possible as of nginx 1.9 using TCP reverse proxies.
You need to compile nginx with the --with-stream parameter.
Then, you can add a stream block to your config like #samottenhoff said in his answer.
For more details, see https://www.nginx.com/resources/admin-guide/tcp-load-balancing/ and http://nginx.org/en/docs/stream/ngx_stream_core_module.html.
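To check whether an existing nginx binary was built with the stream module, something like this should work (nginx -V prints the configure arguments to stderr):

nginx -V 2>&1 | grep -- --with-stream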
Nginx Plus (paid) has a proper option to do that. Another way is to let the Docker container access the host database directly.
I have a Mercurial repository running on Scm-manager proxied behind Nginx. A variety of smaller repositories run fine, so the basic setup seems OK.
Additionally, this same box runs Owncloud. I've tweaked the client_max_body_size on the server to 1000M so large files can be transferred. This works, and I have a variety of large files syncing between the server and clients.
However, when I try pushing a large Mercurial repository for the first time (1007 commits vs. about 80 for the other largest on this system) I get the following:
abort: HTTP Error 413: FULL head
Everything I've read about 413 errors doesn't seem to apply. First, it recommends setting the body size, which I've stated is already at 1G. Next, this seems to imply that the header is too large, which makes sense given that it's probably trying to check 1000+ revisions in the remote repository.
Another thing I've encountered is large_client_header_buffers. I've set this to insanely huge values like "64 128k" at both the server and http levels (I read something about it not working at the server level), but that didn't change anything.
I also looked at the scm-manager logs but see nothing, so this seems to stop with Nginx.
Thoughts? Here is part of my Nginx server configuration:
server {
    server_name thewordnerd.info;
    listen 443 ssl;

    ssl_certificate /etc/ssl/certs/thewordnerd.info.crt;
    ssl_certificate_key /etc/ssl/private/thewordnerd.info.key;

    root /srv/www/thewordnerd.info/public;
    client_max_body_size 1000M;

    location /scm {
        proxy_pass http://127.0.0.1:8080/scm;
        include /etc/nginx/proxy_params;
    }
}
The problem is the header buffer of the application server; this is because Mercurial uses very big headers. You have to increase the size of the header buffer, and this is application-server specific. In case you are using the standalone version, you have to edit the server-config.xml and increase the requestHeaderSize value.
replace:
<Set name="requestHeaderSize">16384</Set>
with:
<Set name="requestHeaderSize">32768</Set>
Source: https://groups.google.com/forum/#!topic/scmmanager/Afad4zXSx78
I had HTTP Error: 413 (Request Entity Too Large) on my attempt to push. It was resolved by adding client_max_body_size 2M; to /etc/nginx/nginx.conf. Wondering if maybe even 1000M isn't enough for client_max_body_size in your case...