I have a problem with Squid. I want to block web access for selected users so that only the domains I explicitly allow are reachable and everything else is denied. I cannot get this configuration to work. What I have so far is a working proxy with authentication.
Can you help me solve my problem?
Regards
acl lan src 192.168.1.0/24
# It does not work
acl TimeWorkUser1 time M T W H F A 7:00-15:00
acl User1 src 192.168.1.100
acl GoodSites dstdomain "/etc/squid/users/GoodSites.cfg"
# end
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic credentialsttl 8 hours
auth_param basic realm Proxy: Authorization required
acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
# It does not work
http_access deny User1 !GoodSites
http_access allow TimeWorkUser1
# end
http_access allow localhost
http_access allow lan
http_access deny all
http_port 3128
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
# cat /etc/squid/users/GoodSites.cfg
www.somedomain.com
somedomain.com
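Squid evaluates http_access rules top to bottom and stops at the first match, so the blanket "http_access allow ncsa_users" near the top short-circuits the per-user rules below it. A minimal sketch of one possible ordering that matches the stated goal, reusing the ACL names defined above (an untested sketch, not a verified config):
# Sketch only: check the per-user whitelist before the general allow
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
# User1 may only reach GoodSites, and only during work hours
http_access deny User1 !GoodSites
http_access deny User1 !TimeWorkUser1
# everyone else only needs valid credentials
http_access allow ncsa_users
http_access deny all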
Related
I have many IPs on the same server and I am using Squid basic authentication.
For example, I have two IPs, two users, and a single port 3128. The issue is that any user can use any outgoing IP.
Below is my Squid configuration:
acl http proto http
acl port_80 port 80
acl port_443 port 443
acl CONNECT method CONNECT
auth_param basic program /usr/bin/python /path/to/authenticationscript
auth_param basic realm Please enter username and password
auth_param basic credentialsttl 1 second
acl AuthUsers proxy_auth REQUIRED
external_acl_type userip %SRC %LOGIN /usr/lib/squid/ext_file_userip_acl -f /path/to/config.file
acl userip external userip
http_access allow userip
http_access deny all
http_port 3128 name=0
acl ip1 myportname 0
tcp_outgoing_address x.x.x.0 ip1
acl ip2 myportname 1
tcp_outgoing_address x.x.x.1 ip2
where x.x.x.x is the IP address of the server.
In config.file I have:
x.x.x.0(ipaddress1) user1
x.x.x.1(ipaddress2) user2
How can I restrict each user to a single outgoing IP?
I found the solution.
I needed to change the http_port line and replace the myportname ACLs with myip ACLs, as below:
http_port 3128
acl ip1 myip x.x.x.0
tcp_outgoing_address x.x.x.0 ip1
acl ip2 myip x.x.x.1
tcp_outgoing_address x.x.x.1 ip2
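Note that myip matches the local address the client connected to, so each user still has to point their client at the corresponding IP. If the goal is to tie the outgoing address to the login itself, a commonly used alternative is one proxy_auth ACL per user (a sketch only; the user names are hypothetical, and depending on the Squid version the credentials must already have been checked by an earlier http_access rule):
# Sketch: bind each authenticated user to one outgoing address
acl user1 proxy_auth user1
acl user2 proxy_auth user2
http_access allow user1
http_access allow user2
http_access deny all
tcp_outgoing_address x.x.x.0 user1
tcp_outgoing_address x.x.x.1 user2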
In summary, although I've set a firewall rule that allows tcp:80, my GCE instance, which is on the "default" network, is not accepting connections to port 80. It appears only port 22 is open on my instance. I can ping it, but can't traceroute to it in under 64 hops.
What follows is my investigation that led me to those conclusions.
gcloud beta compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
default-allow-http default INGRESS 1000 tcp:80
default-allow-https default INGRESS 1000 tcp:443
default-allow-icmp default INGRESS 65534 icmp
default-allow-internal default INGRESS 65534 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default INGRESS 65534 tcp:3389
default-allow-ssh default INGRESS 65534 tcp:22
temp default INGRESS 1000 tcp:8888
gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
ssrf3 us-west1-c f1-micro true 10.138.0.4 35.197.33.182 RUNNING
gcloud compute instances describe ssrf3
...
name: ssrf3
networkInterfaces:
- accessConfigs:
- kind: compute#accessConfig
name: external-nat
natIP: 35.197.33.182
type: ONE_TO_ONE_NAT
kind: compute#networkInterface
name: nic0
network: https://www.googleapis.com/compute/v1/projects/hack-170416/global/networks/default
networkIP: 10.138.0.4
subnetwork: https://www.googleapis.com/compute/v1/projects/hack-170416/regions/us-west1/subnetworks/default
...
tags:
fingerprint: 6smc4R4d39I=
items:
- http-server
- https-server
I ssh into 35.197.33.182 (which is the ssrf3 instance) and run:
sudo nc -l -vv -p 80
On my local machine, I run:
nc 35.197.33.182 80 -vv
hey
but nothing happens.
So I try to ping the host. That looks healthy:
ping 35.197.33.182
PING 35.197.33.182 (35.197.33.182): 56 data bytes
64 bytes from 35.197.33.182: icmp_seq=0 ttl=57 time=69.172 ms
64 bytes from 35.197.33.182: icmp_seq=1 ttl=57 time=21.509 ms
Traceroute quits after 64 hops, without reaching the 35.197.33.182 destination.
So I check which ports are open with nmap:
nmap 35.197.33.182
Starting Nmap 7.12 ( https://nmap.org ) at 2017-06-18 16:39 PDT
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.06 seconds
nmap 35.197.33.182 -Pn
Starting Nmap 7.12 ( https://nmap.org ) at 2017-06-18 16:39 PDT
Nmap scan report for 182.33.197.35.bc.googleusercontent.com (35.197.33.182)
Host is up (0.022s latency).
Not shown: 999 filtered ports
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 1 IP address (1 host up) scanned in 6.84 seconds
… even when I’m running nc -l -p 80 on 35.197.33.182.
Ensure that the VM-level firewall is not intervening. For example, Container-Optimized OS is a bit special compared to the other default images:
By default, the Container-Optimized OS host firewall allows only outgoing connections, and accepts incoming connections only through the SSH service. To accept incoming connections on a Container-Optimized OS instance, you must open the ports your services are listening on.
https://cloud.google.com/container-optimized-os/docs/how-to/firewall
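On such an image you have to open the port in the host firewall yourself. A minimal sketch using standard iptables, assuming the service listens on tcp:80:
# Inspect the current host firewall rules on the instance
sudo iptables -L -v -n
# Accept incoming connections on port 80 at the host level
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT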
Checking the two check boxes "Allow HTTP traffic" and "Allow HTTPS traffic" did the trick. This created two firewall rules that opened ports 80 and 443.
Manually adding rules for those ports didn't work for some reason, but checking the boxes did.
At a quick glance, your setup seems to be correct.
You have allowed INGRESS tcp:80 for all instances in the default network.
Your VM is on the default network.
Traceroute will not give a good indication for VMs running on cloud providers, because of SDNs, virtual networks and a whole lot of intermediate networking infrastructure, unfortunately.
One thing I notice is that your instance has two tags, http-server and https-server. These could be used by some other firewall rules which are somehow blocking traffic to your VM's tcp:80 port.
There are other variables in your setup and I'm happy to debug if needed further.
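One way to check this (a suggestion only; the rule names are the defaults from your listing) is to inspect the rules that normally target those tags and confirm that nothing with a higher priority denies the traffic:
# Show the full definition of the default HTTP rule, including target tags
gcloud compute firewall-rules describe default-allow-http
# Repeat for any custom rules from the earlier listing, e.g.
gcloud compute firewall-rules describe temp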
Tag based firewall rules
You can try tag-based firewall rules, which apply only to instances that carry the specified target tag.
Network tags are used by networks to identify which instances are
subject to certain firewall rules and network routes. For example, if
you have several VM instances that are serving a large website, tag
these instances with a shared word or term and then use that tag to
apply a firewall rule that allows HTTP access to those instances. Tags
are also reflected in the metadata server, so you can use them for
applications running on your instances. When you create a firewall
rule, you can provide either sourceRanges or sourceTags but not both.
# Add a new tag based firewall rule to allow ingress tcp:80
gcloud compute firewall-rules create rule-allow-tcp-80 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-80 --allow tcp:80
# Add the allow-tcp-80 target tag to the VM ssrf3
gcloud compute instances add-tags ssrf3 --tags allow-tcp-80
It might take a few seconds to a couple of minutes for the changes to take effect.
NOTE: Since you're opening up ports of VM's external IPs to the internet, take care to restrict access accordingly as per the needs of your application running on these ports.
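Once the rule and tag are in place, you can verify from outside (a quick check that reuses the tools from the question):
# From the local machine: port 80 should now show as open
nmap -Pn -p 80 35.197.33.182
# or
nc -vz 35.197.33.182 80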
After lots of trial and error, the following worked for me on ubuntu-1404-trusty-v20190514, with a Node.js app listening on port 8080. Accept ports 80 and 8080, then redirect 80 to 8080.
sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
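Note that rules added this way are lost on reboot. One way to persist them on Ubuntu is the iptables-persistent package (package name and path assumed; verify on your image):
sudo apt-get install iptables-persistent
# Save the current rules so they are restored at boot
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'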
In case you are on a Windows Server instance, you could try turning off Windows Defender Firewall and checking whether it is blocking the incoming connection.
I am trying to configure haproxy-1.7.3 with HTTP/2 support using the following ACL rules:
acl rule0 hdr_beg(host) -i i0.
acl rule02 ssl_fc_alpn -i h2 and hdr_beg(host) -i i0.
use_backend i02 if rule02
use_backend i0 if rule0
acl rule1 hdr_beg(host) -i i1.
acl rule12 ssl_fc_alpn -i h2 and hdr_beg(host) -i i1.
use_backend i12 if rule12
use_backend i1 if rule1
backend i0
server node1 192.168.40.51:5000 ssl verify none
backend i02
mode tcp
http-request add-header X-Forwarded-Proto https
server node1 192.168.40.51:5001 check send-proxy
backend i1
server node1 192.168.40.23:5000 ssl verify none
backend i12
mode tcp
http-request add-header X-Forwarded-Proto https
server node1 192.168.40.23:5001 check send-proxy
I want all requests for subdomain i0. forwarded to i0.myserver.com and all requests for subdomain i1. forwarded to i1.myserver.com, with HTTP/2 support.
But in my case all requests are always forwarded to i0.myserver.com. What is wrong with these ACL rules?
So, an acl in tcp mode cannot analyze HTTP headers. The working config is below:
acl rule02 ssl_fc_alpn -i h2
acl rule0 ssl_fc_sni -i i0.mydomain.com
use_backend i02 if rule02 rule0
use_backend i0 if rule0
acl rule12 ssl_fc_alpn -i h2
acl rule1 ssl_fc_sni -i i1.mydomain.com
use_backend i12 if rule12 rule1
use_backend i1 if rule1
Maybe it will be useful for somebody.
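To confirm which backend a given hostname actually reaches, you can check the ALPN protocol and SNI the frontend sees (a quick check; the hostname follows the names used above):
# Negotiate h2 explicitly and send the SNI for the i0 subdomain
openssl s_client -connect i0.mydomain.com:443 -servername i0.mydomain.com -alpn h2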
In our office, we are using Squid to restrict users to particular web sites and URLs. If a user connects to a page via https, a url_regex ACL does not work: for an https request we only have control over the domain, but we need to restrict at the URL level. So we used ssl-bump to intercept the https requests. It is working fine, but we get SSL warnings in the browser.
Is it possible to intercept an SSL connection with ssl-bump without any browser warnings?
squid configuration file
#
# Recommended minimum configuration:
#
#debug_options ALL,3
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
#allowing .zopert.com domains
acl trustedDomains dstdomain -i "/etc/squid/trusted_domains.txt"
#excluded domains
acl excludedDomains dstdomain -i "/etc/squid/excluded_domains.txt"
#allowing grid console.
acl adminConsole urlpath_regex \/admin\/
#allowed urls
acl trustedUrls url_regex -i "/etc/squid/allowed_urls.txt"
# Example rule allowing access from your local networks.
http_port 3129 ssl-bump cert=/etc/squid/test.crt key=/etc/squid/test.key
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
#acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
#acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
#acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
#acl localnet src fc00::/7 # RFC 4193 local private network range
#acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl HTTPS proto HTTPS
#
# Recommended minimum Access Permission configuration:
# Only allow cachemgr access from localhost
#http_access allow manager localhost
http_access deny manager
#http_access allow allowurls
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
#http_access deny CONNECT !SSL_ports
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
#allowing trusted domains(.zopert.com) only.
http_access allow trustedDomains adminConsole
http_access allow trustedDomains trustedUrls
#allowing static domains
http_access allow excludedDomains
#ssl_bump deny trustedDomains
http_access allow CONNECT trustedDomains
#http_access allow CONNECT
always_direct allow HTTPS
#ssl_bump allow adminConsole
ssl_bump allow trustedDomains
#we don't need to intercept other ssl sites.
ssl_bump deny all
# And finally deny all other access to this proxy
#sslproxy_cert_error allow all
#http_access allow localnet
http_access deny all
http_access deny CONNECT
#We recommend you to use at least the following line.
#hierarchy_stoplist cgi-bin ?
# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
logformat squid %ts.%03tu %6tr %>a %>A %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
cache_log /var/log/squid/cache.log
access_log /var/log/squid/access.log
SSL bump is doing a man-in-the-middle attack, and the browser complaining about it is the expected behavior. If you don't want the warnings, you need to import the CA (test.crt) as trusted in all browsers.
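For completeness, a sketch of generating a CA certificate for ssl-bump and exporting it in a form that is easy to import into browser or OS trust stores (the file names follow the question; adjust to your environment):
# Generate a self-signed CA certificate and key for Squid to sign with
openssl req -new -newkey rsa:2048 -sha256 -days 365 -nodes -x509 \
  -keyout /etc/squid/test.key -out /etc/squid/test.crt
# Export a DER copy for import into browser/OS trust stores
openssl x509 -in /etc/squid/test.crt -outform DER -out squid-ca.der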
I set up a Vagrant VirtualBox box for Debian Wheezy following this instruction.
I installed nginx and php5-fpm on this virtual machine. I can access the guest machine via 127.0.0.1:8080 from the host. It can also serve PHP files, and phpinfo() works correctly, too.
However, when I try to access a remote MySQL server from a PHP file, the request always times out and I get a 504 Gateway Timeout error.
I noticed the following:
In my nginx conf file, I have this line fastcgi_pass unix:/var/run/php5-fpm.sock;.
In /etc/php5/fpm/pool.d/www.conf, I have listen = /var/run/php5-fpm.sock.
php5-fpm.sock exists in /var/run/.
If I use 127.0.0.1:9000 instead of the socket, I still get the 504 Gateway Timeout error, but I get the error immediately without any waiting.
I added proxy_read_timeout 300; in my nginx conf, but this did not solve the issue.
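Note that proxy_read_timeout only applies to proxy_pass; when PHP-FPM is reached via fastcgi_pass, the equivalent directive is fastcgi_read_timeout, set inside the PHP location block, for example:
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    # give slow upstream requests more time before nginx returns 504
    fastcgi_read_timeout 300;
}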
My Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
# Every Vagrant virtual environment requires a box to build off of.
config.vm.box = "wheezy32"
config.vm.provision :shell, :path => "dev/bootstrap.sh"
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
config.vm.network :forwarded_port, guest: 80, host: 8080
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network :private_network, ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network :public_network
end
My nginx conf
server {
root /var/www/sites/mysite/public_html;
index index.html index.htm index.php;
# Make site accessible from http://localhost/
server_name localhost;
access_log /var/www/logs/mysite/mysite.access_log;
error_log /var/www/logs/mysite/mysite.error_log;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /index.html;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules
proxy_read_timeout 300;
}
location /doc/ {
alias /usr/share/doc/;
autoindex on;
allow 127.0.0.1;
allow ::1;
deny all;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
# With php5-cgi alone:
# fastcgi_pass 127.0.0.1:9000;
# With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
}
/var/www/logs/mysite/mysite.error_log
2013/06/16 23:47:27 [error] 2567#0: *23 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.0.2.2, server: localhost, request: "GET /test.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "127.0.0.1:8080"
2013/06/16 23:47:27 [error] 2567#0: *23 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 10.0.2.2, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "127.0.0.1:8080"
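The second error is separate from the database issue: try_files falls back to /index.html, which does not exist in the document root, so requests like /favicon.ico end up in a redirection cycle. One possible tweak is to fall back to a plain 404 instead:
location / {
    try_files $uri $uri/ =404;
}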
Here is how I attempt to connect to the remote MySQL server.
require_once('/var/www/sites/mysite/includes/db_constants.php');
try {
$dsn = 'mysql:host=172.16.0.51;dbname=' . DB_NAME . ';charset=utf8';
$db = new PDO($dsn,DB_USER,DB_PASS);
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
} catch (PDOException $e) {
header('HTTP/1.1 500');
exit;
} catch (Exception $e) {
header('HTTP/1.1 500');
exit;
}
What am I missing?
As @cmur2 said, I was using the private IP to connect to the remote server, and that was why it did not work. I changed it to a public IP and now it is working correctly.
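If the remote host is ever unreachable again, a connect timeout on the PDO handle makes the page fail fast instead of hanging until the gateway timeout. A sketch only (the MySQL PDO driver treats PDO::ATTR_TIMEOUT as the connection timeout in seconds):
$db = new PDO($dsn, DB_USER, DB_PASS, array(
    // fail after 5 seconds instead of waiting for the 504
    PDO::ATTR_TIMEOUT => 5,
));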