HAProxy define subdomain wildcard

I am trying to create a HAProxy script which matches certain subdomains to a specific backend.
Given the domains:
foo.x.y.z
bar.x.y.z
bar.a.b.c
baz.a.b.d.e
I want these frontends to be mapped to the backends foo, bar and baz.
I've tried to get this working using hdr_beg(), but I'm missing something, so it does not work :-/
This is my config so far:
frontend HttpFrontend
bind *:80
mode http
acl fooBackend hdr_beg(host) -i foo.
acl barBackend hdr_beg(host) -i bar.
default_backend bazBackend
backend bazBackend
mode http
balance leastconn
option forwardfor
server node1 10.0.1.10:80 check inter 5000 rise 3 fall 3
server node2 10.0.2.10:80 check inter 5000 rise 3 fall 3
server node3 10.0.3.10:80 check inter 5000 rise 3 fall 3
backend fooBackend
mode http
option forwardfor
server node4 10.0.1.14:80
backend barBackend
mode http
option forwardfor
server node4 10.0.1.14:80
Can you give me a hint about what I am missing?
Thanks in advance!

You need use_backend rules; the ACLs only define conditions, they don't route anything by themselves.
frontend HttpFrontend
bind *:80
mode http
acl fooBackend hdr_beg(host) -i foo.
acl barBackend hdr_beg(host) -i bar.
use_backend fooBackend if fooBackend
use_backend barBackend if barBackend
default_backend bazBackend
<...>
Source: https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#use_backend
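As a sanity check, hdr_beg(host) is just a case-insensitive prefix match on the Host header. The routing for the four example hosts can be sketched in shell (backend names match the config above):

```shell
# Sketch of the hdr_beg(host) prefix matching done by the ACLs above.
for host in foo.x.y.z bar.x.y.z bar.a.b.c baz.a.b.d.e; do
  case "$host" in
    foo.*) backend=fooBackend ;;
    bar.*) backend=barBackend ;;
    *)     backend=bazBackend ;;   # default_backend catches everything else
  esac
  echo "$host -> $backend"
done
```

Once the use_backend lines are in place, you can verify the live routing with something like curl -H 'Host: foo.x.y.z' http://your-haproxy/ (hostname is a placeholder).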

Related

Is it possible to point a specific port from a domain name?

Basically what I want is this:
first.name.com:25565 -> 127.0.0.1:25562
second.name.com:25565 -> 127.0.0.1:25565
This is for some Minecraft servers I'm hosting.
What you are looking for is name-based virtual hosting. At layer 4 (transport), you can only redirect to different services by IP or port number. However, a number of protocols, including HTTP(S), transmit the domain name used in the request, and this allows a reverse proxy such as Apache or Nginx to redirect to the actual service on the same or even a different host. Squid is normally used as a forward proxy on the client side, which is not helpful in this case; what you want is a reverse HTTP(S) proxy on the server side. I am most familiar with Apache, so I will present that here, but Nginx and others can do it as well. You will need Apache's name-based virtual hosting to create a different service per hostname and then reverse proxy it to the real service behind it. As a note, you can't have both Apache and the backend service bound to the same wildcard address and port; see the binding caveat at the end.
Listen 10.1.1.1:1234
NameVirtualHost 10.1.1.1:1234
<VirtualHost 10.1.1.1:1234>
ServerName first.name.com
ProxyPreserveHost On
ProxyPass "/" "http://127.0.0.1:4321/"
ProxyPassReverse "/" "http://127.0.0.1:4321/"
</VirtualHost>
<VirtualHost 10.1.1.1:1234>
ServerName second.name.com:1234
ProxyPreserveHost On
ProxyPass "/" "http://127.0.0.1:1234/"
ProxyPassReverse "/" "http://127.0.0.1:1234/"
</VirtualHost>
You also need to make sure that the mod_proxy and mod_proxy_http modules are enabled for Apache. On Debian/Ubuntu, this can be done with this:
$ sudo a2enmod proxy
$ sudo a2enmod proxy_http
And a final note: you asked for the same port on the proxy, 1234, to be redirected to the local host on 127.0.0.1. Normally, I would recommend using a different port for the actual service, but you can share the port if you bind Apache to the external IP explicitly, as I did in the example above using 10.1.1.1, and then bind the internal service only to 127.0.0.1. If you use the normal wildcard binding, written as either 0.0.0.0 or *, then the two services will conflict.
Ok, so here's what I ended up doing:
mc.name.com is pointed at the server's hostname using a CNAME record
The next record I added was an SRV record to make 25565 point at 25562 (or whatever port I need it to be)
_minecraft._tcp.mc.muchieman.com SRV 900 0 5 25562 mc.muchieman.com.
900 being the TTL, 0 the priority, 5 the weight, and 25562 the port to point to
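In zone-file form the same record reads as follows (names copied from above; the SRV data fields are priority, weight, port, target in that order):

```
_minecraft._tcp.mc.muchieman.com. 900 IN SRV 0 5 25562 mc.muchieman.com.
```

The Minecraft client looks up _minecraft._tcp.&lt;hostname&gt; and connects to the target host and port from the record, which is what lets traffic aimed at the default port 25565 land on 25562.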

How is traffic to the openshift_cluster_hostname redirected to the OpenShift web console?

Question 1 :
1.1. Who is sitting behind the "openshift_master_cluster_public_hostname" hostname? Is it the web console (the web console service? or the web console deployment) or something else?
1.2. When doing oc get service -n openshift-web-console I can see that the web console is running on 443. Isn't it supposed to work on port 8443? The same goes for the API server: shouldn't it be working on port 8443?
1.3. Can you explain to me the flow of a request to https://openshift_master_cluster_public_hostname:8443 ?
1.4. in the documentation is
Question 2:
Why do I get different responses for curl and wget?
When I run curl https://openshift_master_cluster_public_hostname:8443 , I get:
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
...
"/swagger.json",
"/swaggerapi",
"/version",
"/version/openshift"
]
}
When I run wget https://openshift_master_cluster_public_hostname:8443 , I get an index.html page.
Is the web console answering this request, or the API server?
Question 3 :
How can I expose the web console on port 443 rather than 8443? I found several solutions:
using the variables openshift_master_console_port and openshift_master_api_port, but I found out that these are 'internal' ports and not designed to be the public ports, so changing them could break your OpenShift setup
using an external service (described here)
I'm trying to set up port forwarding on an external haproxy. Is that doable?
Answer to Q1:
1.1. Cite from the documentation Configuring Your Inventory File
This variable overrides the public host name for the cluster,
which defaults to the host name of the master. If you use an
external load balancer, specify the address of the external load balancer.
For example:
> openshift_master_cluster_public_hostname=openshift-ansible.public.example.com
This means that this variable is the public-facing hostname of the OpenShift web console.
1.2. A Service is a virtual object which connects the service name to the pods, and is used to connect the Route object with the Service object. This is explained in the documentation under Services. You can use almost any port for a Service because it's virtual and nothing binds on that port.
1.3. The answer depends on your setup. I'll explain it for an HA setup with a TCP load balancer in front of the masters.
/> Master API 1
client -> loadbalancer -> Master API 2
\> Master API 3
The client makes a request to https://openshift_master_cluster_public_hostname:8443 , the load balancer forwards it to Master API 1, 2, or 3, and the client gets the answer from that master API server.
The API server redirects to the console if the request comes from a browser ( https://github.com/openshift/origin/blob/release-3.11/pkg/cmd/openshift-kube-apiserver/openshiftkubeapiserver/patch_handlerchain.go#L60-L61 )
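A minimal sketch of such a TCP load balancer in haproxy terms (the master IP addresses are assumptions, substitute your own):

```
frontend masters
    bind *:8443
    mode tcp
    default_backend master_api

backend master_api
    mode tcp
    balance source
    server master1 192.168.0.11:8443 check
    server master2 192.168.0.12:8443 check
    server master3 192.168.0.13:8443 check
```

mode tcp matters here: the load balancer passes the TLS stream through untouched, and each master API server terminates TLS itself.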
Answer to Q2:
curl and wget behave differently because they are different tools, but the HTTPS request is the same.
To get curl's behavior with wget:
wget --output-document=- https://openshift_master_cluster_public_hostname:8443
To get wget's behavior with curl:
curl -o index.html https://openshift_master_cluster_public_hostname:8443
Why - works this way is described in Usage of dash (-) in place of a filename.
Answer to Q3:
You can use the OpenShift Router which you use for the apps to make the Web-Console available on 443. It's a little bit outdated but the concept is the same for the current 3.x versions Make OpenShift console available on port 443 (https) [UPDATE]

Custom Ha-proxy on Openshift

How is it possible to configure the router's haproxy to make a selection on the headers of incoming calls and possibly add other outgoing headers?
I'd like to append a new header only for a call coming from a particular hostname or carrying a specific header.
You can get the default haproxy template by Obtaining the Router Configuration Template,
# oc get po
NAME READY STATUS RESTARTS AGE
router-2-40fc3 1/1 Running 0 11d
# oc rsh router-2-40fc3 cat haproxy-config.template > haproxy-config.template
and then you can customize the template, such as adding an additional header through haproxy configuration. Refer to Go Template Actions for router template details, and to http-request set-header for haproxy config details.
# vim haproxy-config.template
After customizing, you should replace the template with current one through Using a ConfigMap to Replace the Router Configuration Template steps.
$ oc create configmap customrouter --from-file=haproxy-config.template
$ oc set volume dc/router --add --overwrite \
--name=config-volume \
--mount-path=/var/lib/haproxy/conf/custom \
--source='{"configMap": { "name": "customrouter"}}'
$ oc set env dc/router \
TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template
I hope it helps.
My config file is like this... Let's assume we want to add a new header with a value.
So I just need to add the line http-request add-header <header_name> <value> ?
But if I want to add the new header only for a particular incoming hostname, how do I do that?
frontend public
bind :80
mode http
tcp-request inspect-delay 5s
tcp-request content accept if HTTP
monitor-uri /_______internal_router_healthz
# Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
http-request del-header Proxy
# DNS labels are case insensitive (RFC 4343), we need to convert the hostname into lowercase
# before matching, or any requests containing uppercase characters will never match.
http-request set-header Host %[req.hdr(Host),lower]
# check if we need to redirect/force using https.
acl secure_redirect base,map_reg(/var/lib/haproxy/conf/os_route_http_redirect.map) -m found
redirect scheme https if secure_redirect
use_backend %[base,map_reg(/var/lib/haproxy/conf/os_http_be.map)]
default_backend openshift_default
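To answer the follow-up: combine an acl with a condition on the http-request rule. A sketch, where the hostname, header name, and value are placeholders:

```
acl from_special_host req.hdr(host) -i special.example.com
http-request add-header X-Custom-Header some-value if from_special_host
```

Note that haproxy's add-header takes the header name and value as two separate arguments, not a single name:value token.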

CakePHP 3 - Enable SSL on development server [duplicate]

OS: Ubuntu 12.04 64-bit
PHP version: 5.4.6-2~precise+1
When I test an https page I am writing through the built-in webserver (php5 -S localhost:8000), Firefox (16.0.1) says "Problem loading: The connection was interrupted", while the terminal tells me "::1:37026 Invalid request (Unsupported SSL request)".
phpinfo() tells me:
Registered Stream Socket Transports: tcp, udp, unix, udg, ssl, sslv3,
tls
[curl] SSL: Yes
SSL Version: OpenSSL/1.0.1
openssl:
OpenSSL support: enabled
OpenSSL Library Version OpenSSL 1.0.1 14 Mar 2012
OpenSSL Header Version OpenSSL 1.0.1 14 Mar 2012
Yes, http pages work just fine.
Any ideas?
See the manual section on the built-in webserver shim:
http://php.net/manual/en/features.commandline.webserver.php
It doesn't support SSL encryption; it's for plain HTTP requests only. The openssl extension and its function support are unrelated: the built-in server does not accept requests or send responses over the encrypted stream wrappers.
If you want SSL to run over it, try a stunnel wrapper:
php -S localhost:8000 &
stunnel3 -d 443 -r 8000
It's just for toying anyway.
It's been three years since the last update; here's how I got it working in 2021 on macOS (as an extension to mario's answer):
# Install stunnel
brew install stunnel
# Find the configuration directory
cd /usr/local/etc/stunnel
# Copy the sample conf file to actual conf file
cp stunnel.conf-sample stunnel.conf
# Edit conf
vim stunnel.conf
Modify stunnel.conf so it looks like this:
(all other options can be deleted)
; **************************************************************************
; * Global options *
; **************************************************************************
; Debugging stuff (may be useful for troubleshooting)
; Enable foreground = yes to make stunnel work with Homebrew services
foreground = yes
debug = info
output = /usr/local/var/log/stunnel.log
; **************************************************************************
; * Service definitions (remove all services for inetd mode) *
; **************************************************************************
; ***************************************** Example TLS server mode services
; TLS front-end to a web server
[https]
accept = 443
connect = 8000
cert = /usr/local/etc/stunnel/stunnel.pem
; "TIMEOUTclose = 0" is a workaround for a design flaw in Microsoft SChannel
; Microsoft implementations do not use TLS close-notify alert and thus they
; are vulnerable to truncation attacks
;TIMEOUTclose = 0
This accepts HTTPS / SSL at port 443 and connects to a local webserver running at port 8000, using stunnel's default bogus cert at /usr/local/etc/stunnel/stunnel.pem. Log level is info and log outputs are written to /usr/local/var/log/stunnel.log.
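If that stunnel.pem doesn't exist on your machine, you can generate a self-signed one. This is a sketch (paths and the certificate subject are assumptions, adjust to your setup):

```shell
# Create a self-signed key + certificate pair, then bundle them
# into the single PEM file that stunnel's cert = option expects.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout key.pem -out cert.pem
cat key.pem cert.pem > stunnel.pem
```

Point the cert = line in stunnel.conf at the resulting file.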
Start stunnel:
brew services start stunnel # Different for Linux
Start the webserver:
php -S localhost:8000
Now you can visit https://localhost:443 to reach your webserver.
There should be a cert error and you'll have to click through a browser warning but that gets you to the point where you can hit your localhost with HTTPS requests, for development.
I've been learning nginx and Laravel recently, and this error has come up many times. It's hard to diagnose because you need to align nginx with Laravel and also the SSL settings in your operating system, all at the same time (assuming you are making a self-signed cert).
If you are on Windows, it is even more difficult because you have to fight unix carriage returns when dealing with SSL certs. Sometimes you can go through the steps correctly, but you get ruined by cert validation issues. I find the trick is to make the certs in Ubuntu or Mac and email them to yourself, or use the linux subsystem.
In my case, I kept running into an issue where I declare HTTPS somewhere but php artisan serve only works on HTTP.
I just caused this Invalid request (Unsupported SSL request) error again after SSL was hooked up fine. It turned out to be that I was using Axios to make a POST request to https://. Changing it to POST http:// fixed it.
My recommendation to anyone would be to take a look at where and how HTTP/HTTPS is being used.
The textbook definition is probably something like: php artisan serve only serves plain HTTP, but the request arrived expecting an underlying SSL layer.
Use Ngrok
Expose your server's port like so:
ngrok http <server port>
Browse with the ngrok's secure public address (the one with https).
Note: Though it works like a charm, it seems like overkill since it requires internet access; better recommendations would be appreciated.

HAProxy - URL Based routing with load balancing

I am new to HAProxy and I have a configuration question whose answer will help me make a key architecture decision.
I have 3 apps. Let's say app1, app2, app3.
Each app is differentiated by the urls as follows:
www.example.com/app1/123 -> app1
www.example.com/app2/123 -> app2
www.example.com/app3/123 -> app3
I am planning to have 2 instances of each app in 2 different regions:
Region 1 - app1, app2, app3
Region 2 - app1, app2, app3
I see 2 methods to configure this but I am not sure which is the best practice here:
Method 1: Have HAProxy1 to first differentiate the requests using the url patterns.
Requests from HAProxy1 will be routed to another HAProxy server set up individual apps (3 HAProxy servers in this case) for load balancing.
Method 2: Have one great HAProxy server which does the both as stated in method 1. That is, have configuration to segregate the requests depending on the url and then pass each request through individual filter like things set up for each app for load balancing.
I am not sure if Method 2 is supported in HAProxy. Any ideas or suggestions are greatly appreciated. Please shed some light.
You can segregate requests based on URL and load balance with a single HAProxy server.
Your configuration will have something like this:
frontend http
acl app1 path_end -i /app1/123 #matches a path ending with "/app1/123"
acl app2 path_end -i /app2/123
acl app3 path_end -i /app3/123
use_backend srvs_app1 if app1
use_backend srvs_app2 if app2
use_backend srvs_app3 if app3
backend srvs_app1 #backend that lists your servers. Use a balancing algorithm as per your need.
balance roundrobin
server host1 REGION1_HOST_FOR_APP1:PORT
server host2 REGION2_HOST_FOR_APP1:PORT
backend srvs_app2
balance roundrobin
server host1 REGION1_HOST_FOR_APP2:PORT
server host2 REGION2_HOST_FOR_APP2:PORT
backend srvs_app3
balance roundrobin
server host1 REGION1_HOST_FOR_APP3:PORT
server host2 REGION2_HOST_FOR_APP3:PORT
More information can be found in the HAProxy documentation.
Use an acl in HAProxy to define a separate route for each application. You can use path_end or path_beg to match the path. If you'd like to change the request path sent to the backend, use 'http-request set-uri' with a regsub pattern.
backend be_images
balance roundrobin
http-request set-uri '%[path,regsub(^/images/,/static/images/,g)]'
server srv1 127.0.0.1:8001
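regsub behaves like a stream-editor substitution. As an illustration (this is sed, not haproxy itself), rewriting an /images/ prefix to /static/images/ transforms a path like this:

```shell
# Rewrite the /images/ prefix the same way a regsub(^/images/,/static/images/) would
echo "/images/logo.png" | sed 's|^/images/|/static/images/|'
# prints /static/images/logo.png
```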