HAProxy acl rules are not working - acl

I am trying to configure haproxy-1.7.3 with HTTP/2 support using the following acl rules:
acl rule0 hdr_beg(host) -i i0.
acl rule02 ssl_fc_alpn -i h2 and hdr_beg(host) -i i0.
use_backend i02 if rule02
use_backend i0 if rule0
acl rule1 hdr_beg(host) -i i1.
acl rule12 ssl_fc_alpn -i h2 and hdr_beg(host) -i i1.
use_backend i12 if rule12
use_backend i1 if rule1
backend i0
server node1 192.168.40.51:5000 ssl verify none
backend i02
mode tcp
http-request add-header X-Forwarded-Proto https
server node1 192.168.40.51:5001 check send-proxy
backend i1
server node1 192.168.40.23:5000 ssl verify none
backend i12
mode tcp
http-request add-header X-Forwarded-Proto https
server node1 192.168.40.23:5001 check send-proxy
I want all requests for subdomain i0. forwarded to i0.myserver.com and all requests for subdomain i1. forwarded to i1.myserver.com, with HTTP/2 support.
But in my case all requests are always forwarded to i0.myserver.com. What is wrong with these acl rules?

So, acls in tcp mode cannot be used to analyze HTTP headers. The working config is below:
acl rule02 ssl_fc_alpn -i h2
acl rule0 ssl_fc_sni -i i0.mydomain.com
use_backend i02 if rule02 rule0
use_backend i0 if rule0
acl rule12 ssl_fc_alpn -i h2
acl rule1 ssl_fc_sni -i i1.mydomain.com
use_backend i12 if rule12 rule1
use_backend i1 if rule1
Maybe it will be useful for somebody.
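For context, a minimal frontend sketch that these ssl_fc_sni / ssl_fc_alpn rules assume (the frontend name, bind address and certificate path are placeholders, and advertising h2 via alpn needs HAProxy built against OpenSSL 1.0.2 or newer):
frontend https-in
mode tcp
bind *:443 ssl crt /etc/haproxy/certs/mydomain.pem alpn h2,http/1.1
# the acl and use_backend lines above go here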

Related

Does the order in which we write acl and use_backend inside a frontend matter when writing a haproxy config file?

I was writing a haproxy.cfg file and, for better readability, wrote the configuration lines (acl and use_backend) related to the same functionality in groups. So is it okay if I write
acl
acl
use_backend
use_backend
default
acl
use_backend
acl
use_backend
or must I write it in groups of acl and use_backend:
acl
acl
acl
acl
use_backend
use_backend
use_backend
use_backend
default
I tried to search for it in the documentation but couldn't find an answer. BTW, both formats are parsed successfully.
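As a concrete illustration (with hypothetical names), the acl lines are only declarations, so either layout works as long as each acl appears before the line that references it; what actually matters is the relative order of the use_backend lines, because they are evaluated top to bottom and the first matching one wins, with default_backend applying when none match:
frontend http-in
bind *:80
acl is_api path_beg /api
acl is_img path_beg /img
use_backend api_servers if is_api
use_backend img_servers if is_img
default_backend web_servers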

Squid bind outgoing IP

I have many IPs on the same server and I am using Squid basic authentication.
Example: I have two IPs, two users, and one single port, 3128. The issue is that any user can use any outgoing IP.
Below is my squid configuration:
acl http proto http
acl port_80 port 80
acl port_443 port 443
acl CONNECT method CONNECT
auth_param basic program /usr/bin/python /path/to/authenticationscript
auth_param basic realm Please enter username and password
auth_param basic credentialsttl 1 second
acl AuthUsers proxy_auth REQUIRED
external_acl_type userip %SRC %LOGIN /usr/lib/squid/ext_file_userip_acl -f /path/to/config.file
acl userip external userip
http_access allow userip
http_access deny all
http_port 3128 name=0
acl ip1 myportname 0
tcp_outgoing_address x.x.x.0 ip1
acl ip2 myportname 1
tcp_outgoing_address x.x.x.1 ip2
where x.x.x.x is the IP address of the server.
In config.file I have:
x.x.x.0(ipaddress1) user1
x.x.x.1(ipaddress2) user2
How can I tie each user to one outgoing IP?
I found the solution.
I needed to change the http_port line and the myportname acls to the following:
http_port 3128
acl ip1 myip x.x.x.0
tcp_outgoing_address x.x.x.0 ip1
acl ip2 myip x.x.x.1
tcp_outgoing_address x.x.x.1 ip2
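If I understand the myip acl correctly, it matches the local address the client connected to, so this assumes each user points their proxy setting at "their" address:
# user1 connects to x.x.x.0:3128 -> requests leave via x.x.x.0
# user2 connects to x.x.x.1:3128 -> requests leave via x.x.x.1
The ext_file_userip_acl mapping in config.file then ties each login to the matching address.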

Gunicorn + Web2py HTTPS

I need to run web2py with gunicorn over HTTPS (currently I'm running web2py with anyserver.py):
anyserver.py -s gunicorn -i 0.0.0.0 -p 8000
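One possible approach, sketched under the assumption that web2py's wsgihandler.py (which exposes a WSGI application callable) sits in the web2py root and that your gunicorn version supports its built-in TLS options: run gunicorn directly against that handler instead of going through anyserver.py, with placeholder certificate paths:
cd /path/to/web2py
gunicorn -b 0.0.0.0:8000 --certfile=/path/to/server.crt --keyfile=/path/to/server.key wsgihandler:application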

Squid - deny www. How?

I have a problem with Squid. I want to block web access for selected people on the principle that I define the allowed domains and block all the rest, but I cannot get this configuration to work. What I have done so far is a working proxy with authentication.
Can you help me solve my problem?
Regards
acl lan src 192.168.1.0/24
# It does not work
acl TimeWorkUser1 time M T W H F A 7:00-15:00
acl User1 src 192.168.1.100
acl GoodSites dstdomain "/etc/squid/users/GoodSites.cfg"
# end
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic credentialsttl 8 hours
auth_param basic realm Proxy: Wymagana autoryzacja
acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
# It does not work
http_access deny User1 !GoodSites
http_access allow TimeWorkUser1
# end
http_access allow localhost
http_access allow lan
http_access deny all
http_port 3128
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
# cat /etc/squid/users/GoodSites.cfg
www.somedomain.com
somedomain.com
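For what it's worth, Squid evaluates http_access rules top to bottom and stops at the first matching rule, so with http_access allow ncsa_users near the top every authenticated request is already allowed before the User1 rules are reached. A hedged sketch of the ordering that seems intended here, reusing the acls defined above:
http_access deny User1 !GoodSites
http_access deny User1 !TimeWorkUser1
http_access allow ncsa_users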

Cannot connect to real site, SSL error when using SSL bump

In our office we are using Squid to restrict users to connecting only to particular web sites and URLs. If a user connects to a web page via HTTPS, a url_regex acl will not work: in an HTTPS request we only have control over the domain, but we need to restrict at the URL level. So we used SSL bump to intercept the HTTPS requests. It is working fine, but we get some SSL warnings in the browser.
Is it possible to intercept an SSL connection with SSL bump without any browser warnings?
Squid configuration file:
#
# Recommended minimum configuration:
#
#debug_options ALL,3
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
#allowing .zopert.com domains
acl trustedDomains dstdomain -i "/etc/squid/trusted_domains.txt"
#excluded domains
acl excludedDomains dstdomain -i "/etc/squid/excluded_domains.txt"
#allowing grid console.
acl adminConsole urlpath_regex \/admin\/
#allowed urls
acl trustedUrls url_regex -i "/etc/squid/allowed_urls.txt"
# Example rule allowing access from your local networks.
http_port 3129 ssl-bump cert=/etc/squid/test.crt key=/etc/squid/test.key
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
#acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
#acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
#acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
#acl localnet src fc00::/7 # RFC 4193 local private network range
#acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl HTTPS proto HTTPS
#
# Recommended minimum Access Permission configuration:
# Only allow cachemgr access from localhost
#http_access allow manager localhost
http_access deny manager
#http_access allow allowurls
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
#http_access deny CONNECT !SSL_ports
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
#allowing trusted domains(.zopert.com) only.
http_access allow trustedDomains adminConsole
http_access allow trustedDomains trustedUrls
#allowing static domains
http_access allow excludedDomains
#ssl_bump deny trustedDomains
http_access allow CONNECT trustedDomains
#http_access allow CONNECT
always_direct allow HTTPS
#ssl_bump allow adminConsole
ssl_bump allow trustedDomains
#we don't need to intercept other ssl sites.
ssl_bump deny all
# And finally deny all other access to this proxy
#sslproxy_cert_error allow all
#http_access allow localnet
http_access deny all
http_access deny CONNECT
#We recommend you to use at least the following line.
#hierarchy_stoplist cgi-bin ?
# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
logformat squid %ts.%03tu %6tr %>a %>A %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
cache_log /var/log/squid/cache.log
access_log /var/log/squid/access.log
SSL bump is doing a man-in-the-middle interception and the browser is complaining about it, which is the expected behavior. If you don't want the warnings, you need to import the CA (test.crt) as trusted in all browsers.
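As a rough sketch (assuming a self-signed CA is acceptable and the file names match the cert=/key= paths in the config above), the certificate could be generated with openssl and then test.crt distributed to every browser's trusted root CA store, for example via the browser's certificate import dialog; the -subj value is a placeholder:
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/squid/test.key -out /etc/squid/test.crt -subj "/CN=Office Squid Proxy CA"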