Hello, I have one Linux server with one IP address. What I want to do is host 4 or more different websites/services on this server, and all of them should have HTTPS.
I know that it is possible to put websites on different ports, but I don't want that.
I have read about Docker and an nginx reverse proxy. Could someone give me a link to a good explanation?
Thanks
Maty
If you are trying to host 4 different domains on the same server, then first install NGINX with these commands:
sudo apt update
sudo apt install nginx
Then, create 4 different domain.conf files in the /etc/nginx/sites-enabled directory (here, "domain" can be anything you can remember to map to each site).
Paste the code below with some modifications:
server {
    listen 80;
    listen [::]:80;

    server_name **domain**;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:**port**/;
    }
}
In the above code, change the domain and port accordingly.
Run sudo nginx -t and check whether any issues are reported.
Restart nginx (sudo service nginx restart).
Now the 4 different sites should be served over plain HTTP (see the filled-in example below).
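For instance, a filled-in file for one hypothetical site (the domain app1.example.com and the backend port 3000 are placeholders, not values from the question) could look like this:

server {
    listen 80;
    listen [::]:80;

    server_name app1.example.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;
        # the backend service for this site, listening locally on port 3000
        proxy_pass http://127.0.0.1:3000/;
    }
}

Repeat the same pattern with a different server_name and backend port for each of the other sites.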
For HTTPS
Install Certbot:
sudo apt-get update
sudo apt-get install certbot
sudo apt-get install python-certbot-nginx (on newer Ubuntu/Debian releases the package is named python3-certbot-nginx instead)
Run the following command to generate certificates and let Certbot configure nginx:
sudo certbot --nginx -d domain.com -d www.domain.com
Now if you look at the domain.conf files in /etc/nginx/sites-enabled, you will notice that Certbot added a few lines.
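The additions typically look roughly like this (the certificate paths depend on your domain; this is only a sketch of what Certbot usually writes):

listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

Certbot also usually adds a small server block that redirects plain HTTP requests to HTTPS.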
Now restart nginx once more and boom, it works.
Note: you have to purchase those domains from a domain registrar and point them at your server's IP address.
Hope this helps!
In fact, I struggled a lot with this before, which is why I kept these notes with me.
Comment if you run into any errors or have doubts.
I tried to set up Varnish on a Debian 10 instance, but something went wrong, so I tried installing some Apache2 modules like
sudo a2enmod ssl
sudo a2enmod proxy
sudo a2enmod proxy_balancer
sudo a2enmod proxy_http
But after enabling these modules and restarting Apache2, I was unable to start Apache because something was blocking port 443 and causing a conflict with Apache2. When I removed the Listen 443 line from the ports.conf file, I was able to start the Apache server, but only on port 8080. Varnish is running on port 80, but on port 443 there is a service called httpd running, and I am unable to find out from which config file it is configured.
The content of the ports.conf file is
# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf
Listen 8080
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
[Screenshot of the running port processes, showing httpd listening on port 443.]
I am unable to use port 443 and I am also unable to stop that httpd service. It is not linked to Apache2; I tried stopping Apache, but this service keeps running.
The httpd service usually refers to the Apache web server. However, if you install Apache on Debian via apt-get install apache2, the actual service is called apache2. This is also reflected in your netstat output.
On Red Hat-based systems the service is called httpd. Is it possible that you compiled an Apache server from source on that same machine? Or did you accidentally install an httpd-related package?
You can run dpkg -l to list the installed packages; maybe you'll find it there.
Anyway, please kill the httpd process and check if there's a systemd service that contains that service name. You can go into /lib/systemd/system and run grep httpd * (see the commands sketched below).
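A minimal sketch of how you might track it down with standard Debian tools (adjust the patterns as needed):

# which process is actually bound to port 443?
sudo ss -tlnp | grep ':443'
# any httpd-related packages installed through apt?
dpkg -l | grep -i httpd
# any systemd unit file mentioning httpd?
grep -l httpd /lib/systemd/system/*.service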
What about TLS in Varnish?
You shouldn't enable HTTPS on your system by using mod_ssl. You should install a TLS proxy that terminates the TLS session and then passes the plain HTTP connection to Varnish, which in turn will talk plain HTTP to Apache.
I advise you to use Hitch, a TLS proxy developed by Varnish Software engineers. It's flexible, powerful, and lightweight.
To install Hitch, you can find the official packages here: https://packagecloud.io/varnishcache/hitch.
Here's the documentation you might need: https://github.com/varnish/hitch/tree/master/docs
A Varnish Developer Portal tutorial about Hitch will be available some time next week.
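As a rough sketch of that setup (the PEM path and the 8443 port are assumptions, not values from your machine), Hitch terminates TLS on port 443 and forwards the decrypted traffic to an extra Varnish listener over the PROXY protocol:

# /etc/hitch/hitch.conf
frontend = "[*]:443"
backend  = "[127.0.0.1]:8443"
pem-file = "/etc/hitch/example.com.pem"   # certificate chain plus private key in one PEM file
write-proxy-v2 = on                       # preserve the client IP via the PROXY protocol

Varnish then needs an additional listen address such as -a :8443,PROXY next to its normal -a :80, while Apache keeps serving plain HTTP on port 8080 as the backend.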
I am working on Ubuntu 18 and trying to render an HTML page via NGINX. Following this link I did these steps:
Created html directory using sudo mkdir -p /var/www/sample/html
Placed my web files directory webui under the html directory above
Created an nginx conf file using sudo vi /etc/nginx/sites-available/sample.conf
Placed the following in sample.conf:
server {
    listen 80;
    listen [::]:80;

    root /var/www/sample/html;
    index index.html index.htm index.nginx-debian.html;

    server_name 123.54.67.235;

    location / {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://localhost/webui/;
    }

    location /app {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://123.54.67.235:7000;
    }
}
Created a link from it to the sites-enabled directory using sudo ln -s /etc/nginx/sites-available/sample.conf /etc/nginx/sites-enabled/
Un-commented server_names_hash_bucket_size 64;
Ran sudo nginx -t. Got the message below:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Ran sudo systemctl restart nginx. No errors appeared.
Now when I try to go to http://123.54.67.235 from my browser, I get an nginx 500 Internal Server Error.
I am not sure what mistake I am making, as I am very new to and inexperienced with this. Can anyone suggest the reason for this?
UPDATE: When I go to my nginx error log I see the error below:
2019/05/05 05:52:51 [alert] 29779#29779: *2588 768 worker_connections are not enough while connecting to upstream, client: 123.54.67.235, server: 134.209.113.22, request: "GET /webui/webui/webui/webui/webui/webui/webui/webui/.....
Note: I am using my server's IP address in the server_name field of the conf file, as I do not have a domain name assigned to my server.
The proxy_pass http://localhost/webui/; statement points back into the same server and generates a recursive loop by adding an endless number of /webui/ path elements. The proxy_pass directive is intended for a reverse proxy and is used to forward requests to some other server.
To serve static content, you should use a root statement.
If the URI /foo should serve the file at /var/www/sample/html/webui/foo, use root /var/www/sample/html/webui;.
For example:
server {
    ...
    root /var/www/sample/html/webui;
    ...
    location / { }

    location /app {
        include proxy_params;
        proxy_...;
        proxy_pass ...;
    }
}
The location / block is intentionally empty: requests under / are served as static files straight from the root directory.
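Putting it together with the values from the question, the server block might look roughly like this (a sketch assuming the static files live in /var/www/sample/html/webui and the app still listens on port 7000):

server {
    listen 80;
    listen [::]:80;

    server_name 123.54.67.235;

    # static files are served directly from this root
    root /var/www/sample/html/webui;
    index index.html index.htm;

    location / { }

    # only /app is proxied to the backend application
    location /app {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://123.54.67.235:7000;
    }
}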
I'm getting the following error on my chrome console:
GET http://localhost/grunt/vendor/angular/angular.js net::ERR_CONTENT_LENGTH_MISMATCH
This only happens when simultaneous requests are sent to nginx, e.g. when the browser's cache is empty and the whole app loads. Loading the resource above as a single request succeeds.
Here are the headers for this request, copied from Chrome:
Remote Address:127.0.0.1:80
Request URL:http://localhost/grunt/vendor/angular/angular.js
Request Method:GET
Status Code:200 OK
Request Headers
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8,de;q=0.6,pl;q=0.4,es;q=0.2,he;q=0.2,gl;q=0.2
Cache-Control:no-cache
Connection:keep-alive
Cookie:gs_u_GSN-265185-D=1783247335:2567:5000:1377697930719
Host:localhost
Pragma:no-cache
Referer:http://localhost/grunt/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.122 Safari/537.36
Response Headers
Accept-Ranges:bytes
Cache-Control:public, max-age=0
Connection:keep-alive
Content-Length:873444
Content-Type:application/javascript
Date:Tue, 23 Sep 2014 11:08:19 GMT
ETag:"873444-1411465226000"
Last-Modified:Tue, 23 Sep 2014 09:40:26 GMT
Server:nginx/1.6.0
the real size of the file:
$ ll vendor/angular/angular.js
-rw-rw-r-- 1 xxxx staff 873444 Aug 30 07:21 vendor/angular/angular.js
As you can see, the Content-Length and the real size of the file are the same, which is weird.
And the nginx configuration to this proxy:
location /grunt/ {
proxy_pass http://localhost:9000/;
}
Any ideas?
Thanks
EDIT: I found more info in the error log:
2014/09/23 13:08:19 [crit] 15435#0: *8 open() "/usr/local/var/run/nginx/proxy_temp/1/00/0000000001" failed (13: Permission denied) while reading upstream, client: 127.0.0.1, server: localhost, request: "GET /grunt/vendor/angular/angular.js HTTP/1.1", upstream: "http://127.0.0.1:9000/vendor/angular/angular.js", host: "localhost", referrer: "http://localhost/grunt/"
Adding the following line to the nginx config was the only thing that fixed the net::ERR_CONTENT_LENGTH_MISMATCH error for me:
proxy_buffering off;
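With the proxy block from the question, that would look something like this (the /grunt/ location and port 9000 are taken from the question above):

location /grunt/ {
    proxy_pass http://localhost:9000/;
    proxy_buffering off;   # stream responses instead of buffering them to temp files on disk
}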
It seems that under load, nginx tried to buffer angular.js to its proxy temp directory and couldn't due to permission issues. Here's what solved this issue:
root#amac-2:/usr/local/var/run/nginx $ chown -R _www:admin proxy_temp
_www:admin might be different in your case, depending on which user owns the nginx process. See more information on ServerFault:
https://serverfault.com/questions/534497/why-do-nginx-process-run-with-user-nobody
I tried all of the above and still couldn't get it to work, even after resorting to chmod 777. The only thing that solved it for me was to disable proxy temp files entirely:
proxy_max_temp_file_size 0;
Whilst not a real fix and no good for production use, this was OK for me since I'm only using nginx as part of a local development setup.
For me, the remedy was these two settings:
In the file:
/etc/nginx/nginx.conf
Add:
proxy_max_temp_file_size 0;
proxy_buffering off;
Between the lines client_max_body_size 128M; and server_names_hash_bucket_size 256;:
http {
    client_max_body_size 128M;
    proxy_max_temp_file_size 0;
    proxy_buffering off;
    server_names_hash_bucket_size 256;
    ...
}
ps aux | grep "nginx: worker process"
After executing the above command, you'll see which user nginx is running as,
e.g.
www-data 25356 0.0 0.0 68576 4800 ? S 12:45 0:00 nginx: worker process
www-data 25357 0.0 0.0 68912 5060 ? S 12:45 0:00 nginx: worker process
Now run the command below to give that user permission:
chown -R www-data:www-data /var/lib/nginx/
Hope it works.
For us, it turned out that our server's rather small root partition (i.e. /) was full.
It had mountains of logs and files from users in /home. Moving all that cruft out to another mounted drive solved things.
Just wanted to share as this can be another cause of the problem.
If somebody ran nginx as a different user in the past, the ownership of the cache folder may be wrong. I got:
/var/cache/nginx# LANG=C ls -l proxy_temp/
total 40
drwx------ 18 nginx nginx 4096 Jul 14 2016 0
drwx------ 19 nginx nginx 4096 Jul 14 2016 1
drwx------ 19 nginx nginx 4096 Jul 14 2016 2
drwx------ 19 nginx nginx 4096 Jul 14 2016 3
drwx------ 19 nginx nginx 4096 Jul 14 2016 4
drwx------ 19 nginx nginx 4096 Jul 14 2016 5
drwx------ 19 nginx nginx 4096 Jul 14 2016 6
drwx------ 18 nginx nginx 4096 Jul 14 2016 7
drwx------ 18 nginx nginx 4096 Jul 14 2016 8
drwx------ 18 nginx nginx 4096 Jul 14 2016 9
while nginx was running as www-data. So the solution is to change ownership of nginx’s cache directory to the user nginx is running under. In the present case
/var/cache/nginx# chown -R www-data:www-data *
or, even simpler
# rm -r /var/cache/nginx/*
What worked for me was to change the proxy_temp_path to a folder with read/write permissions (777):
location / {
    proxy_temp_path /data/tmp;
}
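If you go this route, the folder has to exist and be writable by the nginx worker user; for a quick local setup that could be something like (the /data/tmp path is the one assumed above, and 777 is only acceptable for local development):

sudo mkdir -p /data/tmp
sudo chmod 777 /data/tmp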
I had the same issue.
Increasing the free disk space on the partition where nginx is installed solved the issue for me.
For macOS with nginx installed via Homebrew, I used the following steps to track down and fix the issue.
Run nginx -h to find your error log location. Look for the following line:
-e filename : set error log file (default: /opt/homebrew/var/log/nginx/error.log)
Take your error log path and tail it to see what error it's reporting when you try to load the page.
tail -f /opt/homebrew/var/log/nginx/error.log
From that I saw that one of the lines showed a permission denied error:
open() "/opt/homebrew/var/run/nginx/proxy_temp/9/01/0000000019" failed (13: Permission denied) while reading upstream
This means that your proxy temp directories have incorrect permissions for the nginx user.
Stop nginx
brew services stop nginx
Delete all the temp folders (location from the permission error log line)
sudo rm -rf /opt/homebrew/var/run/nginx/*
Start nginx again
brew services start nginx
After doing this, nginx will recreate the temp folders with the correct permissions. At this point you should be good to try reloading the page that was failing before.
When I tried the aforementioned solutions, they didn't fix the issue. I also made the location writable, but that didn't work either. Then I realized I had done something wrong there: in the path used to store the file, I had something like
"/storage" + fileName + ".csv"
I was testing in a Windows environment and it was working great. But later, when we moved the application to a Linux environment, it stopped working. So I had to change it to
"./storage" + fileName + ".csv"
and it started working normally.
For me, the solution was:
sudo chown -R nginx:nginx /var/cache/nginx/fastcgi_temp/
For anyone using HAProxy as the proxy and seeing these exact same symptoms, increasing the timeout values resolved the issue for me:
timeout connect 5000
timeout client 50000
timeout server 50000
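For context, these directives normally live in the defaults section (or a specific frontend/backend) of haproxy.cfg, and bare numbers are interpreted as milliseconds; a minimal sketch:

defaults
    mode http
    timeout connect 5000
    timeout client  50000
    timeout server  50000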
The only thing that helped me was the following settings in the nginx site .conf file:
proxy_read_timeout 720s;
proxy_connect_timeout 720s;
proxy_send_timeout 720s;
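These are proxy-level directives, so they belong inside the server or location block that does the proxying, for example (the upstream address here is just a placeholder):

location / {
    proxy_pass http://127.0.0.1:9000;   # placeholder upstream
    proxy_read_timeout    720s;
    proxy_connect_timeout 720s;
    proxy_send_timeout    720s;
}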
For me, I had the same error, except on a different folder, /var/lib/nginx/.
I changed the owner to nginx by
chown -R nginx:nginx /var/lib/nginx/. That did not work.
Then I checked who owned the nginx worker process by
ps aux| grep nginx
It was running as nginx, but when I looked through the nginx.conf file I found that the user was nginx without any group set. So I added the group nginx to the user directive; it turned out like this:
user nginx nginx;
Now I rebooted the system and the issue was fixed. I suppose I could have just used
chown -R nginx /var/lib/nginx/
That may have worked as well. So if anyone is facing this issue, first go into /var/log/nginx and check where the permission error occurred.
I set up a Vagrant VirtualBox box for Debian Wheezy following these instructions.
I installed nginx and php5-fpm on this virtual machine. I can access my guest machine via 127.0.0.1:8080 from the host. It can also serve PHP files, and phpinfo() works correctly, too.
However, when I try to access a remote MySQL server from a PHP file, the request always times out and I get a 504 Gateway Timeout error.
I noticed the following.
In my nginx conf file, I have this line fastcgi_pass unix:/var/run/php5-fpm.sock;.
In /etc/php5/fpm/pool.d/www.conf, I have listen = /var/run/php5-fpm.sock.
php5-fpm.sock exists in /var/run/.
If I use 127.0.0.1:9000 instead of the socket, I still get the 504 Gateway Timeout error, but I get the error immediately without any waiting.
I added proxy_read_timeout 300; in my nginx conf, but this did not solve the issue.
My Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "wheezy32"
  config.vm.provision :shell, :path => "dev/bootstrap.sh"

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  config.vm.network :forwarded_port, guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network :private_network, ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network :public_network
end
My nginx conf
server {
    root /var/www/sites/mysite/public_html;
    index index.html index.htm index.php;

    # Make site accessible from http://localhost/
    server_name localhost;

    access_log /var/www/logs/mysite/mysite.access_log;
    error_log /var/www/logs/mysite/mysite.error_log;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.html;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
        proxy_read_timeout 300;
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        allow ::1;
        deny all;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        # With php5-cgi alone:
        # fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
/var/www/logs/mysite/mysite.error_log
2013/06/16 23:47:27 [error] 2567#0: *23 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.0.2.2, server: localhost, request: "GET /test.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "127.0.0.1:8080"
2013/06/16 23:47:27 [error] 2567#0: *23 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 10.0.2.2, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "127.0.0.1:8080"
Here is how I attempt to connect to the remote MySQL server.
require_once('/var/www/sites/mysite/includes/db_constants.php');
try {
    $dsn = 'mysql:host=172.16.0.51;dbname=' . DB_NAME . ';charset=utf8';
    $db = new PDO($dsn, DB_USER, DB_PASS);
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $db->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
} catch (PDOException $e) {
    header('HTTP/1.1 500');
    exit;
} catch (Exception $e) {
    header('HTTP/1.1 500');
    exit;
}
What am I missing?
As #cmur2 said, I was using the private IP to connect to the remote server, and that was why it did not work. I changed it to a public IP and now it is working correctly.
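For anyone debugging something similar, a quick sanity check from inside the VM is to test whether the MySQL host and port are reachable at all before blaming nginx or PHP-FPM (adjust the address and port to your setup):

# from inside the Vagrant guest
nc -zv -w 5 172.16.0.51 3306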