How can I keep Google Chromium from making unrequested outgoing connections? - configuration

I'm using the Chromium browser as the display for an embedded openSUSE-based project. Everything's going well, but I just found out that Chromium is making dozens of connections to various *.1e100.net domains. I know this is Google's Safe Browsing system kicking in, but in my case it's useless because Chromium only displays my own embedded server. I also know it isn't nefarious and won't cause explicit harm, but I'm concerned that customers will see the traffic and get worried.
I've tried turning off safe browsing by editing .config/chromium/Default/Preferences...
"safebrowsing": {
"enabled": false
},
... but to no avail. I'm also worried that there are other Chromium features that may kick in and send backdoor traffic.
So, how can I tell Chromium to stop making unrequested outgoing connections? Do I need to block it at the system level?

My best solution has been to use iptables to block all outgoing requests to ports 80 and 443. Yes, this prevents other browsers from being used in my product, but that isn't a problem for an embedded system.
Here's the script which cleans up any previous rules and then sets up blocking rules:
# Chrome has a nasty habit of connecting to various *.1e100.net domains, probably for
# safe browsing, but who knows. The concern is that our customers will see these
# connections and wonder what the heck's going on. So, we block them.
# Kill any previous KILL_CHROME chain. First, get rid of all referencing rules
RULES=$(sudo iptables -L OUTPUT --line-numbers | grep KILL_CHROME | cut -d' ' -f1 | sort -rn)
for rule in $RULES; do
sudo iptables -D OUTPUT $rule
done
# Clean out chain
sudo iptables --flush KILL_CHROME
# Remove chain
sudo iptables -X KILL_CHROME
# Now, build new rules. Add new iptables chain KILL_CHROME
sudo iptables -N KILL_CHROME
# Any newly-created outgoing tcp connections on eth0 to port 80 are routed to KILL_CHROME
sudo iptables -A OUTPUT -o eth0 -m conntrack --ctstate NEW -p tcp --dport 80 -j KILL_CHROME
# Any newly-created outgoing tcp connections on eth0 to port 443 are routed to KILL_CHROME
sudo iptables -A OUTPUT -o eth0 -m conntrack --ctstate NEW -p tcp --dport 443 -j KILL_CHROME
# Log every connection in KILL_CHROME
sudo iptables -A KILL_CHROME -j LOG --log-prefix "New Dropped: "
# And drop it like a hot potato.
sudo iptables -A KILL_CHROME -j DROP
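If Chromium also needs to reach a web server of ours on another machine, an ACCEPT rule can be inserted ahead of the two KILL_CHROME jumps. This is only a sketch; 192.168.1.5 is a made-up placeholder for wherever such a server actually lives.
# Hypothetical: allow HTTP/HTTPS to our own server (placeholder address 192.168.1.5)
# before anything else falls through to KILL_CHROME.
sudo iptables -I OUTPUT 1 -o eth0 -p tcp -d 192.168.1.5 -m multiport --dports 80,443 -j ACCEPT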
'Twould be good for Chromium to support some sort of flag to prevent this behavior, but since there doesn't seem to be one this is the best I can do.

Opening port 80 on Oracle Cloud Infrastructure Compute node

This is an elementary question, but one I cannot seem to resolve by perusing the Oracle Cloud Infrastructure documentation. I've created an Ubuntu-based compute node, and it's attached to a subnet. In that subnet I've created a stateful rule with source 0.0.0.0/0, IP protocol: TCP, Source Port Range: All, Destination Port Range: 80.
There is no firewall configured on the server.
Despite this configuration I can't access the compute node's public IP. Any ideas?
I figured it out. The connectivity issue was due to Oracle's default use of iptables on all Oracle-provided images. Literally the very first thing I did when spinning up this instance was check ufw, presuming there were a few firewall restrictions in place. The ufw status was inactive, so I concluded the firewall was locally wide open. Since, to my understanding, both ufw and iptables drive the same netfilter kernel firewall, and ufw is the de facto standard firewall front end on Ubuntu, I have no idea why they concluded it made sense to use iptables in this fashion. Maybe just to standardize across all images?
I learned about the rules by running:
$ sudo iptables -L
Then I saved the rules to a file so I could add the relevant ones back later:
$ sudo iptables-save > ~/iptables-rules
Then I ran these commands to effectively disable iptables by allowing all traffic through:
$ sudo iptables -P INPUT ACCEPT
$ sudo iptables -P OUTPUT ACCEPT
$ sudo iptables -P FORWARD ACCEPT
$ sudo iptables -F
To clear all iptables rules at once, run this command (--flush is the long form of -F):
$ sudo iptables --flush
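If the relevant rules need to be added back later, the file saved above can be reloaded in one shot (this assumes the ~/iptables-rules path used earlier):
$ sudo iptables-restore < ~/iptables-rules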
Anyway, hope this helps somebody else out because documentation on the matter is non-existent.
When deploying compute instances at Oracle Cloud Infrastructure you need to take into account a few things:
Create Internet Gateway (IGW).
Define routes to point to IGW.
Allow port 80 in the Security List associated with the subnet. By default you only have access to SSH and ICMP type 3, code 4.
Allow connectivity on Compute's instance firewall (which is enabled by default).
In your example, if you are using an OEL image:
$ sudo firewall-cmd --zone=public --permanent --add-port=80/tcp
$ sudo firewall-cmd --reload
Always refer to the official guide: https://docs.cloud.oracle.com/en-us/iaas/developer-tutorials/tutorials/apache-on-ubuntu/01oci-ubuntu-apache-summary.htm
$ sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
$ sudo netfilter-persistent save
$ sudo systemctl restart apache2
credited to https://medium.com/@fathi.ria/oracle-database-cloud-open-ports-on-oci-1af24f4eb9f2
Compute Instance (such as Ubuntu) -> Virtual Cloud Network -> Security List -> Ingress Rules -> add a rule to allow access to port 80 from anywhere.
Pre-Requisite
VM instance should have been created and running
Access to Public and Private keys used during the creation of VM instance
Log into the VM using SSH and run the following command
$ sudo iptables --list --line-numbers
It will show the details of Chain INPUT (policy ACCEPT). From that list, you need to delete the REJECT-all rule in iptables:
$ sudo iptables -D INPUT <Reject Line number>
e.g.
$ sudo iptables -D INPUT 6
Check that the REJECT rule has been deleted:
$ sudo iptables --list --line-numbers
Access the default Security List and edit the Ingress Rules to allow internet traffic on the port.
Edit the Ingress Rule: add CIDR 0.0.0.0/0, TCP, destination port 9999.
(N): Networking > Virtual Cloud Networks > Virtual Cloud Network Details > Security Lists > Security List Details
Access your application via web browser
Type http://<public IP address of the VM>:port
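As a quick check from another machine, a tool like curl shows whether the port is now reachable (just a sketch; substitute the instance's public IP and port):
$ curl -I http://<public IP address of the VM>:<port>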
I guess if you add the rule below to your iptables it should work; that way you won't disturb the other rules, related to block volume attachment, that come preconfigured on those Oracle images.
iptables -I INPUT 5 -i ens3 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
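To keep that rule across reboots it also needs to be saved; a sketch, assuming netfilter-persistent is installed as in the Ubuntu-based answer above:
$ sudo netfilter-persistent save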
If you have not created Internet Gateway yet, that might be the reason. In order to connect the VCN with the public internet you need to have an Internet Gateway and a route table to direct the traffic through the gateway.

tcpdump doesn't capture properly on a specific port

I'm on a network and I want to capture FTP packets from another server on the network, but I'm having a problem with tcpdump.
I've used this command:
tcpdump -i eth0 dst X.X.X.X -A and port 21
But it doesn't show anything! (I tested and am sure that the FTP port is 21.)
But if I use this on my server it works properly:
tcpdump -i eth0 -A and port 21
I have this problem whenever I include "port" in the command, but if I enter a command without a specific port it works and captures properly.
What is the problem?
Thanks.
I don't have enough reputation to add a comment, so this is part question and part insight.
Is the IP you're filtering on the client or the server for the FTP connection?
For the first command, try using src x.x.x.x or just host x.x.x.x and port 21.
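Spelled out, with x.x.x.x standing in for the FTP server's address, that first suggestion would look something like:
tcpdump -A -i eth0 host x.x.x.x and port 21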
For the second command, the "and" is not necessary with the -A flag. It should look more like one of these:
tcpdump -A -i eth0 port 21
tcpdump -Ai eth0 port 21
Another thing I've seen: if there are VLAN tags, normal filtering won't work without adding "vlan and" to your filter. For example:
tcpdump -A -i eth0 "vlan and host x.x.x.x and port 21"
Also keep in mind that FTP uses a control and data connection. The control is over port 21, but the data can vary depending on whether you're using active or passive FTP.
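Because the data connection uses a separate, often dynamic port, one way to be sure of catching it (again a sketch, with x.x.x.x as the server) is to filter on the host alone and sort the ports out afterwards:
tcpdump -n -A -i eth0 host x.x.x.x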

iptables causing external sites to have problems when connecting to mysql

Recently I've managed to block all unused ports on my dedicated server (Linux CentOS, latest 64-bit), but whenever I do so, sites that connect to my database simply cannot connect. These are the rules I added:
iptables -A INPUT -i lo -p tcp --dport 3306 -j ACCEPT
iptables -A OUTPUT -o lo -p tcp --sport 3306 -j ACCEPT
I believe it has something to do with the OUTPUT port, but I am not sure.
Thanks.
If you want to allow remote incoming MySQL connections, you will need to define an INPUT rule that is not restricted to your loopback interface:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
In CentOS this will be defined in the /etc/sysconfig/iptables file. Then restart:
sudo service iptables restart
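Equivalently, the rule can be added live from the shell and then saved so it survives a restart; a sketch that assumes the classic iptables service script (matching the service iptables restart above):
sudo iptables -I INPUT -m state --state NEW -p tcp --dport 3306 -j ACCEPT
sudo service iptables save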
Alternatively, you can configure the firewall from the command line with:
sudo system-config-firewall-tui
The tool is in the package of the same name:
sudo yum install system-config-firewall-tui -y

kvm net devices sharing traffic

Using linux KVM/QEMU, I have a virtual machine with two NICs presented at the host as tap interfaces:
-net nic,macaddr=AA:AA:AA:AA:00:01,model=virtio \
-net tap,ifname=tap0a,script=ifupbr0.sh \
-net nic,macaddr=AA:AA:AA:AA:00:02,model=virtio \
-net tap,ifname=tap0b,script=ifupbr1.sh \
In the guest (also running linux), these are configured with different subnets:
eth0 Link encap:Ethernet HWaddr aa:aa:aa:aa:00:01
inet addr:10.0.0.10 Bcast:10.0.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
eth1 Link encap:Ethernet HWaddr aa:aa:aa:aa:00:02
inet addr:192.168.0.10 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Routes only go to the expected places:
ip route list
default via 10.0.0.1 dev eth0 metric 100
10.0.0.0/16 dev eth0 proto kernel scope link src 10.0.0.10
192.168.0.0/24 dev eth1 proto kernel scope link src 192.168.0.10
But somehow they don't seem to be treated by KVM as being connected to distinct networks.
If I trace the individual interfaces, they both see the same traffic.
For example, if I ping on the 10.0.0.0/16 subnet with ping -I eth0 10.0.0.1
and simultaneously trace the two tap interfaces with tcpdump, I see the pings coming through on both tap interfaces:
sudo tcpdump -n -i tap0a
10:51:56.308190 IP 10.0.0.10 > 10.0.0.1: ICMP echo request, id 867, seq 1, length 64
10:51:56.308217 IP 10.0.0.1 > 10.0.0.10: ICMP echo reply, id 867, seq 1, length 64
sudo tcpdump -n -i tap0b
10:51:56.308190 IP 10.0.0.10 > 10.0.0.1: ICMP echo request, id 867, seq 1, length 64
10:51:56.308217 IP 10.0.0.1 > 10.0.0.10: ICMP echo reply, id 867, seq 1, length 64
That seems strange to me since it's pretty clear that the guest OS would have only actually sent this on the tap0a interface.
Is this expected behavior? Is there a way to keep the interfaces separate as I expected?
Is this some misconfiguration issue on my part?
Additional info, here are the two ifupbr0.sh and ifupbr1.sh scripts:
% cat ifupbr0.sh
#!/bin/sh
set -x
switch=br0
echo args = $*
if [ -n "$1" ];then
sudo tunctl -u `whoami` -t $1
sudo ip link set $1 up
sleep 0.5s
sudo brctl addif $switch $1
exit 0
else
echo "Error: no interface specified"
exit 1
fi
% cat ifupbr1.sh
#!/bin/sh
set -x
switch=br1
echo args = $*
if [ -n "$1" ];then
sudo tunctl -u `whoami` -t $1
sudo ip link set $1 up
sleep 0.5s
sudo brctl addif $switch $1
exit 0
else
echo "Error: no interface specified"
exit 1
fi
I see this problem even if I detach the tap0b interface from br1. It still shows the traffic that I'd expect only on tap0a. That is, even when:
% brctl show
bridge name bridge id STP enabled interfaces
br0 8000.26a2d168234b no tap0a
br1 8000.000000000000 no
br2 8000.000000000000 no
It looks like I answered my own question eventually, but I'll document it for anyone else that hits this.
Evidently this really is the intended behavior of KVM for the options I was using.
At this URL:
http://wiki.qemu.org/Documentation/Networking
I found:
QEMU previously used the -net nic option instead of -device DEVNAME
and -net TYPE instead of -netdev TYPE. This is considered obsolete
since QEMU 0.12, although it continues to work.
The legacy syntax to create virtual network devices is:
-net nic,model=MODEL
And sure enough, I'm using this legacy syntax. I thought the new syntax was just more flexible, but it apparently has this intended behavior:
The obsolete -net syntax automatically created an emulated hub (called
a QEMU "VLAN", for virtual LAN) that forwards traffic from any device
connected to it to every other device on the "VLAN". It is not an
802.1q VLAN, just an isolated network segment.
The vlans it supports are also just emulated hubs, and don't forward out to the host at all as best I can tell.
Regardless, I reworked the QEMU options to use the "new" netdev syntax and obtained the behavior I wanted here.
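For reference, a sketch of what the same two NICs might look like with the newer -netdev/-device syntax (the net0/net1 ids are arbitrary names, not from the original command line):
-netdev tap,id=net0,ifname=tap0a,script=ifupbr0.sh \
-device virtio-net-pci,netdev=net0,mac=AA:AA:AA:AA:00:01 \
-netdev tap,id=net1,ifname=tap0b,script=ifupbr1.sh \
-device virtio-net-pci,netdev=net1,mac=AA:AA:AA:AA:00:02 \
With -netdev, each backend is tied to exactly one device by its id, so there is no implicit hub joining the two.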
What do you have in the ifupbr0.sh and ifupbr1.sh scripts? What bridging tool are you using? That is the important piece that segregates your traffic onto the desired interfaces.
I've used openvswitch to handle my bridging stuff. But before that I used bridge-utils in Debian.
I wrote some information about bridge-utils at http://blog.raymond.burkholder.net/index.php?/archives/31-QEMUKVM-BridgeTap-Network-Configuration.html. I have other posts regarding what I did with bridging on the OpenVSwitch side of things.

how to configure eth0 as a sender udp port in tcl

I have multiple network interfaces on my PC. I want to configure only eth0 as the UDP sender for sending packets to another PC. How can I specify the interface to be used as the UDP sender? I have installed libudp-tcl but am not able to find a way to do it. Can anybody tell me the exact way to do that?
The udp package can't do what you want. As kostix mentioned you can always modify the udp package at the C level to expose the binding interface to tcl.
But there is an alternative work-around.
On Linux you can use iptables to restrict packets for specific ports to only go out through specific interfaces. So just open a UDP port of your choice (for example 9999) and then only allow packets from that port to go out through eth0, dropping them on all other interfaces.
For example, say your application uses UDP port 9999, then set up the following iptables rules:
# Accept UDP packets from source port 9999 going out via eth0
iptables -A OUTPUT -o eth0 -p udp --source-port 9999 -j ACCEPT
# Drop UDP packets from source port 9999 on all other interfaces
iptables -A OUTPUT -p udp --source-port 9999 -j DROP
Or you can do it in tcl using exec:
# Warning! Need to be root to do this:
set myPort 9999
exec /sbin/iptables -A OUTPUT -o eth0 -p udp --source-port $myPort -j ACCEPT
exec /sbin/iptables -A OUTPUT -p udp --source-port $myPort -j DROP
But always remember to delete your added rules before your program exits:
# Clean up iptables rules
exec /sbin/iptables -D OUTPUT -o eth0 -p udp --source-port $myPort -j ACCEPT
exec /sbin/iptables -D OUTPUT -p udp --source-port $myPort -j DROP
From what I gather, to do what you want, you need to bind(2) to a specific IP address (one of those available on eth0) first, but the udp package does not appear to support anything like this.
So it looks like you need to patch the package yourself. Tcl has excellent C API so it's not really hard if you're familiar with C.