I have configured Varnish, and the configuration works fine when the backend host is specified as an IP address. But Varnish fails to start when the host is specified as a domain name (say, www.google.com).
This works: Varnish starts successfully and everything functions as expected.
backend default {
    .host = "74.125.225.14";
    .port = "80";
}
This does NOT work; Varnish fails to start:
backend default {
    .host = "www.google.com";
    .port = "80";
}
Any idea what could be wrong?
I do not have the logs right now, but I will update the question once I have access to them.
To my knowledge, Varnish does not support hostnames here. Instead, what we do is configure Apache (or some other proxy that does support hostnames) to listen on another local port and send Varnish's backend traffic through that proxy, as sketched below.
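A rough sketch of that workaround, assuming Apache with mod_proxy and mod_proxy_http enabled (the local port 8181 and the vhost itself are arbitrary choices, not part of the original setup):
# local hop that resolves the hostname on behalf of Varnish
Listen 127.0.0.1:8181
<VirtualHost 127.0.0.1:8181>
    ProxyPreserveHost Off
    ProxyPass        / http://www.google.com/
    ProxyPassReverse / http://www.google.com/
</VirtualHost>
The Varnish backend would then point at the proxy instead of the remote hostname, i.e. .host = "127.0.0.1" and .port = "8181".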
I'm currently building a monitoring system with Zabbix, and I'm especially trying to set up a Zabbix proxy.
As far as I know, if I set the Zabbix proxy to active, it is not necessary for the Zabbix server to be able to open a connection to the proxy.
Now, what I wonder is whether each Zabbix proxy must have its own static IP.
If the proxy is set to active, there is no need for it to have a static IP address; all that is needed is that the proxy can open a TCP connection to the server.
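For reference, active mode is selected in the proxy's own configuration file. A minimal zabbix_proxy.conf sketch (the server name and proxy hostname below are example values only):
# ProxyMode=0 means active: the proxy initiates the connection to the server
ProxyMode=0
Server=zabbix.example.com
# Must match the proxy name configured in the Zabbix frontend
Hostname=branch-office-proxy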
I have created an HTTP load balancer to basically redirect from port 80 to port 8080. The server on my instance is running on port 8080.
I can connect to the server directly, but the LB is not able to connect to the instance: accessing the LB's IP directly fails, and the health check always fails as well. The instance group the LB is using consists of just that single instance.
I read Google Compute Engine health checks failing
and the google-address-manager is running. However, when running ip route list table local there is no route for my LB. The user in the question above is using network load balancing, not HTTP load balancing (as I am), so I don't know whether that is related.
Or perhaps it's related to a firewall? I have added my LB's IP address to a firewall rule that allows tcp:8080.
Does anybody have any idea how I can fix this? I am not experienced with Debian or GCP.
Should I just try running the route add command referenced in the question above? If so, why is the google-address-manager not adding the route?
Thank you in advance!
You need to make sure that the port mapping on your instance group is set to the correct port, 8080 in your case.
First, edit your instance group and set the port name and port number to 8080.
Then, navigate to your HTTP backend's settings and change the default port to the port name you configured in the instance group.
Finally, make sure that your firewall rules allow access on port 8080 from 0.0.0.0/0, or at least from the HTTP load balancer's address range (130.211.0.0/22).
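If you prefer the command line, the same steps can be sketched roughly with gcloud (the group, backend service, zone, and rule names below are placeholders, and flags can differ slightly between gcloud versions):
# map the named port "http" to 8080 on the instance group
gcloud compute instance-groups set-named-ports my-instance-group \
    --named-ports=http:8080 --zone=us-central1-a
# make the backend service use that named port
gcloud compute backend-services update my-backend-service \
    --port-name=http --global
# open 8080 for the load balancer range mentioned above
gcloud compute firewall-rules create allow-lb-8080 \
    --allow=tcp:8080 --source-ranges=130.211.0.0/22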
I had the same issue and fixed it by adding a firewall rule for the health checker (which does not come from the same IP as your LB!). See https://cloud.google.com/compute/docs/load-balancing/health-checks?hl=en_US#http_and_https_load_balancing for instructions.
In my case, I had not configured the HTTP health check correctly.
I used "/" as the path, but on my backend "/" redirects (HTTP 301) to a login page, which itself responds with HTTP 200.
The health check does not follow redirects; every HTTP response code other than 200 is considered unhealthy (from Debugging Health Checks in Load Balancing on Google Compute Engine).
So I changed my path to "/login", and that fixed my issue.
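If the health check is managed with gcloud, the path change described here can be applied with something along these lines (the health check name is a placeholder):
gcloud compute http-health-checks update my-health-check --request-path=/login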
I would like to use a domain name to point to a web page on the local server's IP address. The problem is that the page is served on port 8088 rather than 80, because the latter is already used by another web page. The domain company told me they cannot do it because a domain can only point to an IP address on port 80. So now I am stuck. What alternatives do I have, and how can I make a domain point to IP:8088?
Thanks
The domain company that you talked to may have done a poor job of explaining how domains work. Domain names don't refer to specific ports. They just refer to IP addresses. The client can look up a hostname to get the IP address which the client should connect to, but the client has to figure out the port without the help of DNS. Port 80 is just the default port for HTTP service.
You can certainly run a web server on port 8088 if you like. The port number would have to appear in the URL, e.g. http://somehost.example.com:8088/some/page. Clients would parse this and know to connect to port 8088 instead of the default port 80.
If you don't want URLs to contain the port number, then requests are going to go to the default port 80, and you have no choice but to make the web server running on port 80 handle them. HTTP/1.1 requests include the hostname the client wants to contact, and modern web server programs are normally capable of serving completely different sets of content based on the hostname in the request. There are a few ways to do what you need:
Just configure the web server on port 80 to handle both sites. How depends on which web server software you're using; Apache, for example, calls these "virtual hosts", and a minimal sketch appears after this list. This is a typical solution, and some people run hundreds of sites on the same server this way.
Run your two web servers as you planned. Set up the server for port 80 to be a reverse proxy for the second website. The server would continue to serve content for the site it handles now. When it receives a request for the second site, it would relay the request to the server running on port 8088, and relay the server's response back to the client.
Move the existing server for port 80 to a different port. Run a pure reverse proxy server on port 80, relaying requests for both web sites to their respective web servers.
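As an illustration of the first option, a name-based virtual host setup in Apache might look roughly like this (the hostnames and document roots are made up; nginx and other servers have equivalents):
<VirtualHost *:80>
    ServerName   existing-site.example.com
    DocumentRoot /var/www/existing-site
</VirtualHost>
<VirtualHost *:80>
    ServerName   new-site.example.com
    DocumentRoot /var/www/new-site
</VirtualHost>
Apache picks the vhost whose ServerName matches the Host header of each request, so both sites can share port 80 on one IP address.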
You might be better off taking further questions to https://webmasters.stackexchange.com/ or https://serverfault.com/.
You can use a proxy to reroute the given domain to IP:PORT. To accomplish this, you could either spin up an Nginx server and configure it as your reverse proxy, or use this project, which does exactly what you want with almost no configuration: https://github.com/cristianoliveira/ergo
If you run Apache on port 80, which is the most common case, then the easiest way to solve this is to set up a VirtualHost that uses ProxyPass.
<VirtualHost *:80>
    ServerName sub.domain.com
    # use https:// below only if the backend on port 8088 actually serves TLS
    ProxyPass        / http://ip-or-domain.com:8088/
    ProxyPassReverse / http://ip-or-domain.com:8088/
</VirtualHost>
I am looking to deploy an nginx/DNS server on a VPS that acts as a proxy and maps to the real back-end in a different geographical location. The back-end runs Apache, MySQL, Dovecot, and Postfix. It is a paid mail service. Users are entered through Apache, via PHP, into MySQL, and when users set up IMAP, Dovecot/Postfix look them up in MySQL and deliver mail or handle outbound SMTP.
I read that in the nginx.conf file I can declare the mail hostname on the proxy like so:
mail {
    server_name mail.example.com;
    ...
}
Is this mail.example.com the actual MX for the example.com mail exchanger listed in DNS? Here is where that came from:
"As you can see, we declared the name of this server at the top of the mail context.
This is because we want each of our mail services to be addressed as mail.example.
com. Even if the actual hostname of the machine on which NGINX runs is different,
and each mail server has its own hostname, we want this proxy to be a single point
of reference for our users. This hostname will in turn be used wherever NGINX
needs to present its own name, for example, in the initial SMTP server greeting."
So, from my understanding, the physical hostname of the proxy should be something other than mail.example.com. In DNS on the proxy, can I define that as anyhost.example.com? The proxy also proxies back to my Apache on the back-end.
Finally, how do I set up DNS for the back-end? What hostname do I choose for the actual box running Apache, MySQL, Dovecot, and Postfix? It's all on one box. I understand that at the registrar I point to two nameservers; these should be the two proxies, so that running dig would only pull up the proxies, and the MX would be "known" to be on the proxy.
In your case, where all of the services, including the proxy, are on one box, you can make Apache, MySQL, and the other services accessible only from localhost (127.0.0.1). Then in nginx you declare upstreams pointing at those local ports, for example:
upstream apache_backend { server 127.0.0.1:80; }
upstream mysql_backend { server 127.0.0.1:3306; }
so that nginx serves the front-end requests and forwards them to the designated services.
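For the mail side that the quoted book passage describes, a minimal nginx mail context could look roughly like this (the auth endpoint and ports are assumptions; nginx's mail proxy requires an auth_http service that tells it which Dovecot/Postfix backend to use for each login):
mail {
    server_name mail.example.com;
    # hypothetical auth service that returns the real back-end for each user
    auth_http 127.0.0.1:9000/auth;
    server {
        listen   143;
        protocol imap;
    }
    server {
        listen   25;
        protocol smtp;
    }
}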
We have a system running on Amazon's Beanstalk.
We would like to limit access to the server to HTTPS only.
When blocking HTTP in the environment settings, it prevents access through the Beanstalk DNS name.
However, if someone knows the public IP (or name) of any of the servers, they can access it directly over HTTP. It seems that the LB forwards requests to port 80, so we cannot simply remove port 80 from the security group.
Is there a simple way to limit HTTP access so that it is allowed only from the LB?
Thanks
You should be able to do this through EC2 Security Groups, which are an Elastic Beanstalk environment property.
By default this allows connections to port 80 from any IP address, but you could remove that rule or replace it with one for your own IP address (for testing purposes).
Failing that, you could redirect all HTTP traffic to HTTPS at the application level, or simply test the CGI property *server_port_secure* and refuse to answer.
Yes, you need http/80 to be open for the health check to work. The option for you is to redirect all other requests (except the health-check URL) to HTTPS. That way, even though the port is open, you don't serve any data insecurely.
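If the instances run Apache (many Beanstalk platforms do), that "redirect everything except the health check" idea can be sketched with mod_rewrite, keyed on the X-Forwarded-Proto header the load balancer adds (the /health-check path is a placeholder):
RewriteEngine On
# only redirect requests that arrived at the LB over plain HTTP
RewriteCond %{HTTP:X-Forwarded-Proto} !=https
# leave the health-check URL alone so the LB keeps seeing a 200
RewriteCond %{REQUEST_URI} !^/health-check$
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]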
You have at least two options:
1 - Set a security group rule that allows access on port 80 from the load balancer only. IMPORTANT: do not use the load balancer's IP in the instances' security group; use the load balancer's security group ID instead.
2 - Remove the public IPs from the instances. You should be good if all your EC2 instances have only private IPs and the ELB has a public IP.
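For option 1, the rule can be added with the AWS CLI roughly like this (both group IDs are placeholders; the important part is referencing the LB's security group via --source-group instead of a CIDR range):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789instance \
    --protocol tcp --port 80 \
    --source-group sg-0123456789loadbal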