I'm trying to use a reverse proxy for MySQL. For some reason this doesn't work (mysql-1.example.com points to a VM running MySQL):
upstream db {
    server mysql-1.example.com:3306;
}

server {
    listen 3306;
    server_name mysql.example.com;

    location / {
        proxy_pass http://db;
    }
}
Is there a correct way to do this? I tried connecting with the mysql client, but it doesn't work.
Make sure your config is not placed within the http { } section of nginx.conf. This config should sit outside of http { }:
stream {
    upstream db {
        server mysql-1.example.com:3306;
    }

    server {
        listen 3306;
        proxy_pass db;
    }
}
You're trying to accomplish a TCP proxy with an HTTP proxy, which is wrong.
Nginx can do the TCP load balancing/proxy stuff, but the syntax is different.
Look at https://www.nginx.com/resources/admin-guide/tcp-load-balancing/ for more info.
It should be possible as of nginx 1.9 using TCP reverse proxies.
You need to compile nginx with the --with-stream parameter.
Then, you can add a stream block in your config like #samottenhoff said in his answer.
For more details, see https://www.nginx.com/resources/admin-guide/tcp-load-balancing/ and http://nginx.org/en/docs/stream/ngx_stream_core_module.html.
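For example, you can check whether your nginx binary was already built with the stream module by inspecting its configure arguments (a quick sanity check; nginx -V prints to stderr):

nginx -V 2>&1 | grep -o with-stream

If nothing is printed, the binary was built without stream support and you need to recompile.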
NGINX Plus (the paid version) has a proper option to do that. Another way is to let the Docker container access the host database directly.
1. What I've tried
I want to build an OCP cluster (actually a single-node, all-in-one) like this blog post
link: openshift.com/blog/revamped-openshift-all-in-one-aio-for-labs-and-fun
and I also referred to the official document: Installing bare metal
So, what I have tried is this
(I used VirtualBox to make four VMs):
- 1 bastion
- 1 dns
- 1 master
- 1 bootstrap
These VMs are on the same network.
First, I made the ignition files to boot the master and bootstrap nodes.
The install-config.yaml that I used:
apiVersion: v1
baseDomain: hololy-local.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 1
metadata:
  name: test
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
I only changed baseDomain, the master replica count, pullSecret, and sshKey.
After making the ignition files, I started to boot the bootstrap and master nodes with the ISO file.
The bootstrap node installed successfully, but a problem occurs on the master node.
2. Details
Before starting the master node installation, I had to set up DNS, because unlike the bootstrap installation, the master node requests domain info during installation.
IP addresses:
dns: 192.168.56.114
master: 192.168.56.150
The DNS zone is set up like this:
And I started to set up the master node using these parameters:
coreos.inst.install_dev=sda
coreos.inst.image_url=http://192.168.56.114/rhcos438.x86_64.raw.gz
coreos.inst.ignition_url=http://192.168.56.114/master.ign
ip=192.168.56.150::192.168.56.254:255.255.255.0:core0.hololy-local.com:enp0s3:none nameserver=192.168.56.114
The installation finished successfully, but when the node boots without the boot disk (.iso), an error comes out.
It seems to be trying to fetch the master configuration file from api-int.aio.hololy-local.com:22623, and it connects to the IP address that I wrote in the zone file.
But strangely, the connection is refused continuously.
Since I set a static IP during the RHCOS installation, a ping test to 192.168.56.150 succeeds.
I think port 22623 is blocked. But how can I open the port before the OS boots?
I don't know how to solve it.
Thanks.
I solved it.
The difference between the 3.11 and 4.x installations is whether a load balancer is necessary.
In 4.x a load balancer is required, so you have to set one up.
In my case I set up the load balancer with nginx; the sample is like this:
stream {
    upstream ocp_k8s_api {
        # round-robin is the default balancing method
        server 192.168.56.201:6443;   # bootstrap
        server 192.168.56.202:6443;   # master1
        server 192.168.56.203:6443;   # master2
        server 192.168.56.204:6443;   # master3
    }

    server {
        listen 6443;
        proxy_pass ocp_k8s_api;
    }

    upstream ocp_m_config {
        server 192.168.56.201:22623;  # bootstrap
        server 192.168.56.202:22623;  # master1
        server 192.168.56.203:22623;  # master2
        server 192.168.56.204:22623;  # master3
    }

    server {
        listen 22623;
        proxy_pass ocp_m_config;
    }

    upstream ocp_http {
        server 192.168.56.205:80;     # worker1
        server 192.168.56.206:80;     # worker2
    }

    server {
        listen 80;
        proxy_pass ocp_http;
    }

    upstream ocp_https {
        server 192.168.56.205:443;    # worker1
        server 192.168.56.206:443;    # worker2
    }

    server {
        listen 443;
        proxy_pass ocp_https;
    }
}
Thanks.
I have NGINX, itself running as a container, set up as a reverse proxy for a virtual network of Docker containers. One of these containers serves an Angular 4 based SPA with client-side routing in HTML5 mode.
The application is mapped to location / on NGINX, so that http://server/ brings you to the SPA home screen.
server {
    listen 80;
    ...

    location / {
        proxy_pass http://spa-server/;
    }

    location /other/ {
        proxy_pass http://other/;
    }

    ...
}
The Angular router changes the URL to http://server/home or other routes when navigating within the SPA.
However, when I try to access these URLs directly, a 404 is returned. This error originates from the spa-server, because it obviously does not have any content for these routes.
The examples I found for configuring NGINX to support this scenario always assume that the SPA's static content is served directly from NGINX and thus try_files is a viable option.
How is it possible to forward any unknown URLs to the SPA so that it can handle them itself?
The solution that works for me is to add the directives proxy_intercept_errors and error_page to the location / in NGINX:
server {
    listen 80;
    ...

    location / {
        proxy_pass http://spa-server/;
        proxy_intercept_errors on;
        error_page 404 = /index.html;
    }

    location /other/ {
        proxy_pass http://other/;
    }

    ...
}
Now, NGINX will return /index.html, i.e. the SPA, from the spa-server whenever an unknown URL is requested. Still, the URL is available to Angular, and the router will immediately resolve it within the SPA.
Of course, now the SPA is responsible for handling "real" 404s. Fortunately, this is not a problem and a good practice within the SPA anyway.
UPDATE: Thanks to #dan
Using Nginx, I'm trying to configure my server to accept all domains that point to its IP and show them a specific website, but when www.example.com (the main website) is accessed, show other content.
Here's what I did so far:
server {
    # Redirect www to non-www
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;
    # rest of the configuration
}

server {
    # Catch all
    listen 80 default_server;
    # I also tried
    # server_name _;
    # without any luck.
    # Rest of the configuration
}
The problem with this configuration is that every request to this server that is not for www.example.com or example.com is handled by the example.com server configuration, not the catch-all.
I'd like to catch only www.example.com/example.com in the first two configurations, and all the others in the last one.
I suggest putting your default server at the top of the file :)
I think nginx wants default servers to be at the top of a file.
I have a lot of config files on my server, but there is one with a default server as the first server declaration, and that works; see the sketch below.
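As an illustration, a minimal sketch of that layout, with the catch-all declared first and marked default_server (the document roots are hypothetical):

server {
    listen 80 default_server;
    server_name _;
    root /var/www/catchall;   # hypothetical: site shown for all other domains
}

server {
    listen 80;
    server_name example.com;
    root /var/www/main;       # hypothetical: the main website
}

The default_server flag, not file order, is what determines the default for a listen port (without the flag, the first server declared wins), but keeping the catch-all first makes the intent obvious.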
I've googled a lot and found several workarounds, but they require you to define every single directory.
On Apache: example.com/hi -> example.com/hi/
On nginx: example.com/hi -> Firefox can't establish a connection to the server at example.com:8888
where 8888 is the port Apache is listening on (nginx's :80 -> localhost:8888).
Any ideas how to fix this so it just redirects normally, like a folder?
I had a similar problem with varnish and nginx (varnish on port 80 proxying to nginx listening on 8080) and needed to add "port_in_redirect off;". server_name_in_redirect needed to stay on so nginx knew which host it was handling.
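For illustration, a minimal sketch of that setup, assuming nginx listens on 8080 behind the front-end proxy (server name and root are hypothetical):

server {
    listen 8080;
    server_name example.com;
    server_name_in_redirect on;   # keep the host name in redirects
    port_in_redirect off;         # redirect to example.com/hi/ rather than example.com:8080/hi/
    root /var/www/example;        # hypothetical document root
}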
The following should do the trick, but it needs more thought/work, because only a single location block will get used at a time:
location ~ ^(.*[^/])$ {
    if (-d $document_root/$1) {
        rewrite ^(.*)$ $1/ permanent;
    }
}
(not tested)
You can set "server_name_in_redirect off" in your server section:
server {
    listen 80 default;
    server_name localhost;
    server_name_in_redirect off;
    ...
}
That will do the trick ;-)
HTH.
This is the magic that works best for me:
try_files $uri $uri/ @redirect;

location @redirect {
    if ($uri !~ '/$') {
        return 301 $uri/$is_args$args;
    }
}
The 'if' statement here is safe per: http://wiki.nginx.org/IfIsEvil
I am running Django, FastCGI, and Nginx. I am creating an API of sorts where someone can send data via XML, which I will process, and then return status codes for each node that was sent over.
The problem is that Nginx will throw a 504 Gateway Time-out if I take too long to process the XML -- I think longer than 60 seconds.
So I would like to set up Nginx so that any request matching the location /api does not time out for 120 seconds. What setting will accomplish that?
What I have so far is:
# Handles all api calls
location ^~ /api/ {
    proxy_read_timeout 120;
    proxy_connect_timeout 120;
    fastcgi_pass 127.0.0.1:8080;
}
Edit: What I have is not working :)
Proxy timeouts are, well, for proxies, not for FastCGI...
The directives that affect FastCGI timeouts are client_header_timeout, client_body_timeout and send_timeout.
Edit: Considering what's found on the nginx wiki, the send_timeout directive is responsible for setting the general timeout of the response (which was a bit misleading). For FastCGI there's fastcgi_read_timeout, which affects the FastCGI process response timeout.
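For the question's setup, that would look something like this (a sketch based on the location block from the question):

# Handles all api calls, giving the FastCGI backend up to 120s to respond
location ^~ /api/ {
    fastcgi_pass 127.0.0.1:8080;
    fastcgi_read_timeout 120s;
}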
For those using nginx with unicorn and rails, most likely the timeout is in your unicorn.rb file.
Put a large timeout in unicorn.rb:
timeout 500
If you're still facing issues, try setting fail_timeout=0 in your upstream in nginx and see if that fixes it. This is for debugging purposes and might be dangerous in a production environment.
upstream foo_server {
    server 127.0.0.1:3000 fail_timeout=0;
}
In the nginx http section (/etc/nginx/nginx.conf), add or modify:
keepalive_timeout 300s;
In the nginx server section (/etc/nginx/sites-available/your-config-file.com), add these lines:
client_max_body_size 50M;
fastcgi_buffers 8 1600k;
fastcgi_buffer_size 3200k;
fastcgi_connect_timeout 300s;
fastcgi_send_timeout 300s;
fastcgi_read_timeout 300s;
In the PHP-FPM pool file, in the case of 127.0.0.1:9000 (/etc/php/7.X/fpm/pool.d/www.conf), modify:
request_terminate_timeout = 300
I hope this helps you.
If you use unicorn:
Look at top on your server; unicorn is likely using 100% of the CPU right now.
There are several possible reasons for this problem.
You should check your HTTP requests; some of them can be very heavy.
Check unicorn's version. Maybe you updated it recently and something broke.
In the proxy server, set it like this:
location / {
    proxy_pass http://ip:80;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
}
In the PHP server, set it like this:
server {
    client_body_timeout 120;

    location = /index.php {
        # include fastcgi.conf;                        # example
        # fastcgi_pass unix:/run/php/php7.3-fpm.sock;  # example version
        fastcgi_read_timeout 120s;
    }
}