Deployed a single-node cluster with oc:
$ sudo oc cluster up --public-hostname=$PUBLIC_IP
$ oc version
oc v3.7.14
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth GSSAPI Kerberos SPNEGO
Added allowHostPorts: true to the restricted SCC via oc edit scc restricted.
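For reference, a non-interactive way to make the same change (a sketch; assumes oc patch accepts SCCs the way it does other resources in 3.x):
$ oc patch scc restricted -p '{"allowHostPorts": true}'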
Deployed an application with a NodePort config:
NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
tiger-dev   172.30.30.215   <nodes>       8080:30111/TCP   10d
I was not able to access the application at $NODEIP:30111, even though I can see traffic coming into the node and nothing going back out.
Update 1:
iptables rules -> https://pastebin.com/NPeZM26W
$ sudo iptables-save | grep 30111
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/tiger-dev:http" -m tcp --dport 30111 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/tiger-dev:http" -m tcp --dport 30111 -j KUBE-SVC-PPQAPMXK5BHYARPU
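A few checks that narrow this down (a sketch; the chain and service names come from the output above):
$ sudo iptables-save -t nat | grep KUBE-SVC-PPQAPMXK5BHYARPU   # does the service chain point at any KUBE-SEP endpoints?
$ oc get endpoints tiger-dev                                   # is there a pod behind the service?
$ sudo tcpdump -i any -nn tcp port 30111                       # confirm packets arrive and watch for replies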
Using kubeadm with a two-node cluster on VirtualBox CentOS 7 VMs.
I have an app written in R and a MySQL database, each in its own pod.
I've successfully followed instructions to set up the nginx ingress controller so that the app can be reached from outside the VMs by my local machine. Check :)
However, when the app (R) now tries to reach the mysql service, the name doesn't resolve; the same happens when pinging 'mysql' from bash. This no longer works:
mydb <- dbConnect(MySQL(), user = 'root', password = 'password',
                  dbname = 'prototype', host = 'mysql')
Instead I have to use the pod's IP, which does work.
mydb <- dbConnect(MySQL(), user = 'root', password = 'password',
                  dbname = 'prototype', host = '10.244.1.233')
However, isn't this going to change on reboots and system changes? I'd like a more stable way to refer to the MySQL database.
$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.56.101:6443 5h
mysql 10.244.1.233:3306 41m
r-user-app 10.244.1.232:8787,10.244.1.232:3838 2h
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h
mysql ClusterIP 10.96.138.132 <none> 3306/TCP 28m
r-user-app LoadBalancer 10.100.228.80 <pending> 3838:32467/TCP,8787:31754/TCP 2h
$ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
r-user-app storage.test.com 80, 443 3h
$ kubectl describe service mysql
Name: mysql
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=neurocore,tier=mysql
Type: ClusterIP
IP: 10.96.138.132
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 10.244.1.236:3306
Session Affinity: None
Events: <none>
$ ps auxw | grep kube-proxy
root 1914 0.1 0.3 44848 21668 ? Ssl 11:03 0:20 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
root 29218 0.0 0.0 112660 980 pts/1 R+ 14:23 0:00 grep --color=auto kube-proxy
$ iptables-save | grep mysql
-A KUBE-SEP-7P27CEQL6WJZRBQ5 -s 10.244.1.236/32 -m comment --comment "default/mysql:" -j KUBE-MARK-MASQ
-A KUBE-SEP-7P27CEQL6WJZRBQ5 -p tcp -m comment --comment "default/mysql:" -m tcp -j DNAT --to-destination 10.244.1.236:3306
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.138.132/32 -p tcp -m comment --comment "default/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.138.132/32 -p tcp -m comment --comment "default/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-M7XME3WTB36R42AM
-A KUBE-SVC-M7XME3WTB36R42AM -m comment --comment "default/mysql:" -j KUBE-SEP-7P27CEQL6WJZRBQ5
Based on your svc, you should be able to reach mysql:3306 from within the cluster.
Have you tried kubectl exec -it r-user-app bash and pinging mysql from within the R app container? host mysql should return something like "mysql.default.svc.cluster.local has address 10.96.138.132" (for example), or else an error. If there is no error, then maybe dbConnect() doesn't like the host name? See the sketch below.
Your service itself looks correctly configured.
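A sketch of that check as a single command (pod name taken from the question; the exact output format is an assumption based on a default kube-dns setup):
$ kubectl exec -it r-user-app -- host mysql
mysql.default.svc.cluster.local has address 10.96.138.132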
ping 10.96.138.132 no response :(
Each Service has a static virtual IP, so not being able to ping it is normal: it is only a virtual address, and traffic to it is handled differently (rewritten by kube-proxy) than traffic to a real address, so ICMP gets no reply.
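A quick way to see the difference (a sketch; assumes nc is available in the pod): TCP to the Service port is rewritten by kube-proxy and should connect even though ping gets no reply:
nc -zv 10.96.138.132 3306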
I see only two likely reasons for this problem:
Something is wrong with DNS resolution in your application's container. Try using 10.96.138.132 as the MySQL address instead of mysql; if that fixes it, you have a resolution problem. By the way, you can use the Service IP instead of the DNS name, as mentioned above: it is static.
Something is wrong with the forwarding rules. Check the kube-proxy logs in the kube-system namespace; they may give you additional information for debugging (see the sketch below).
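Roughly like this (a sketch; the kube-proxy pod name is a placeholder):
kubectl -n kube-system get pods                        # find the kube-proxy and DNS pods
kubectl -n kube-system logs kube-proxy-xxxxx           # look for sync or forwarding errors
kubectl -n kube-system logs -l k8s-app=kube-dns        # check the DNS side as well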
This turned out to be an issue with flannel. After I switched to Weave as the CNI, service discovery and kube-dns work fine.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
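One caveat when switching CNIs (the manifest file and label below are assumptions; check how flannel was installed on your cluster): the old flannel deployment likely needs removing first, and the DNS pods need recreating so they get addresses on the new network:
kubectl delete -f kube-flannel.yml                      # remove the old CNI manifest (if that is how it was installed)
kubectl -n kube-system delete pod -l k8s-app=kube-dns   # recreate DNS pods on the Weave network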
Good day,
I have a CentOS 7 machine that is going to be a web server and a TeamSpeak server at the same time. I have configured iptables correctly for my web server: nginx and MariaDB are reachable on their designated ports. Now I have my TeamSpeak 3 server installed, but it cannot contact the MySQL (MariaDB) database on the same machine. I don't know what iptables entry I should add to let it connect.
These are my iptable rules:
# Fresh start
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
# SSH
iptables -A INPUT -p tcp -s <ADMIN IP> --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
# allow rpm and stuff
iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -m state --state NEW -j ACCEPT
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# HTTP(S) web server
iptables -A INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
# MariaDB MySQL
iptables -A INPUT -p tcp -s <ADMIN IP> --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 3306 -m state --state ESTABLISHED -j ACCEPT
# TeamSpeak
iptables -A INPUT -p udp --dport 9987 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p udp --sport 9987 -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --dport 2008 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 2008 -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --dport 30033 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 30033 -m state --state ESTABLISHED -j ACCEPT
# save & restart
service iptables save
systemctl restart iptables
What entry am I missing to make this work? The TeamSpeak logs clearly say it cannot connect to 127.0.0.1, and when I turn off iptables everything works, so it has to be something I am missing. I also don't want to add a global loopback rule!
Add the following to your # MariaDB MySQL rules:
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 3306 -j ACCEPT
iptables -A OUTPUT -p tcp -d 127.0.0.1 --sport 3306 ! --syn -j ACCEPT
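These two rules allow only MySQL traffic across loopback rather than a blanket lo accept; the ! --syn on the OUTPUT side permits replies but not new outbound connections. A quick check that it works (nc is in the nmap-ncat package on CentOS 7):
nc -z 127.0.0.1 3306 && echo "3306 reachable over loopback"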
I am setting up two Docker containers:

container1                container2
 |        |                   |
eth0     eth1                eth1
 |        |                   |
docker0  docker1 <------------+
 |
internet
docker0 and docker1 are the bridges.
I have IP forwarding set to 1 on both the host and in the containers.
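(A sketch of that setting, applied on the host and inside container1:)
sysctl -w net.ipv4.ip_forward=1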
In container1 I have set up:
iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE
Still, I am not able to ping anything on the internet from container2. I can see that packets are being received at eth1 of container1.
OS: ubuntu 13.10
docker version: 0.11.1, build fb99f99
Am I missing some configuration?
Steps to reproduce:
SERV=$(docker run --privileged=true -i -d -t -v ~/Projects/code/myproject/build:/build:ro debian:7.4 /bin/bash)
CLI=$(docker run --privileged=true -i -d -t -v ~/Projects/code/myproject/build:/build:ro debian:7.4 /bin/bash)
sudo pipework br1 $SERV 10.1.0.1/8
sudo pipework br1 $CLI 10.1.0.3/8
In $SERV:
iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE
In $CLI
Disable the interface eth0 and set the default route via eth1.
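A sketch of those two steps, run inside $CLI (10.1.0.1 is $SERV's pipework address):
ip link set eth0 down
ip route add default via 10.1.0.1 dev eth1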
Now pinging 10.1.0.1 from $CLI works, but the internet is still unreachable.
Hm, it should work as you described. Maybe the default route is not configured correctly.
This is what I did:
SERV=$(docker run -i --privileged -d -t debian:7.4 /bin/bash)
CLI=$(docker run --privileged -i -d -t debian:7.4 /bin/bash)
docker exec -ti $CLI ping google.de # Internet up
docker exec -ti $CLI ip link set eth0 down
docker exec -ti $CLI ping google.de # Internet down
pipework br1 $SERV 10.1.0.1/8
pipework br1 $CLI 10.1.0.2/8
docker exec -ti $SERV apt-get install -y iptables
docker exec -ti $SERV iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE
docker exec -ti $CLI ip route add default via 10.1.0.1 dev eth1
docker exec -ti $CLI ping google.de # Internet up
docker exec -ti $CLI apt-get install -y traceroute
docker exec -ti $CLI traceroute google.de
iptables inside a container can only be changed when the command is executed from the Docker host against a container run with --privileged.
Here is a script:
iptables, along with a couple of tools, is installed during the image build (Dockerfile):
inetutils-traceroute iputils-tracepath iptables
Here I use "phusion-dockerbase", but you can use whatever image you want:
#!/bin/bash
### ==> Install & configure iptable during build
#RUN sudo apt-get install -y inetutils-traceroute iputils-tracepath iptables
# Build the image
#sudo docker build -t mybimage -f phusion-dockerbase .
### container1
C1=$(docker run --privileged -i -d -t mybimage /bin/bash)
sudo docker exec -ti $C1 iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE
sleep 2
sudo pipework br6 -i eth1 $C1 192.168.66.1/24
### container2
lxterminal -e "sudo docker run -ti --name c2name mybimage /bin/bash"
sleep 2
C2="$(sudo docker ps | grep c2name | awk '{ print $1; }')"
sudo pipework br6 -i eth1 $C2 192.168.66.2/24@192.168.66.1
Result:
./lab.sh
From container2 (I use lxterminal to open it in a new window), verify that the internet is now reachable.
Note that as soon as you stop container1, the corresponding pipework and iptables modifications are lost, and even when you restart the stopped container, you will need to reissue the commands:
pipework br6 -i eth1 52b95d6052f7 192.168.66.1/24
docker exec 52b95d6052f7 iptables -I POSTROUTING -t nat -o eth0 -j MASQUERADE
for container1 to act as a NAT box again.
Even committing the running container1 to a new image and running a new container from it doesn't help.
Recently I've managed to block all unused ports on my dedicated server (latest 64-bit CentOS), but whenever I do, sites that connect to my database simply cannot connect.
iptables -A INPUT -i lo -p tcp --dport 3306 -j ACCEPT
iptables -A OUTPUT -o lo -p tcp --sport 3306 -j ACCEPT
I believe it has something to do with the OUTPUT rules, but I am not sure.
Thanks.
If you want to allow remote incoming MySQL connections, you will need to define an INPUT rule that is not restricted to the loopback interface:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
On CentOS this is defined in the /etc/sysconfig/iptables file. Then restart:
sudo service iptables restart
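For example, placed among the stock CentOS defaults (the surrounding lines are illustrative only):
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited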
Alternatively, you can configure the firewall from the command line with:
sudo system-config-firewall-tui
It is in the package of the same name:
sudo yum install system-config-firewall-tui -y
I want to whitelist two external IP addresses for port 3306 (MySQL) but block all other IP addresses from reaching port 3306 on a Debian server running a MySQL instance. Both external IP addresses should be able to connect to the MySQL server.
What is the best way to do this with iptables?
What I did:
/sbin/iptables -A INPUT -p tcp -d 127.0.0.1 --dport 3306 -j ACCEPT
/sbin/iptables -A INPUT -p tcp -d 1.1.1.1 --dport 3306 -j ACCEPT
/sbin/iptables -A INPUT -p tcp -d 85.x.x.x --dport 3306 -j ACCEPT
(1.1.1.1 is an internal IP, masked here for security purposes)
## Block all connections to 3306 ##
/sbin/iptables -A INPUT -p tcp --dport 3306 -j DROP
What happened:
Every external IP is blocked and can't connect.
What should happen:
Every external IP is blocked and can't connect, except 1.1.1.1, 85.x.x.x, and 127.0.0.1.
Your ACCEPT rules match on -d (the destination address); to whitelist clients you need to match the source address instead. A dedicated chain keeps this tidy:
iptables -N mysql # create a chain for MySQL
iptables -A mysql --src 127.0.0.1 -j ACCEPT
iptables -A mysql --src 1.1.1.1 -j ACCEPT
iptables -A mysql --src 85.x.x.x -j ACCEPT
iptables -A mysql -j DROP # drop packets from other hosts
iptables -I INPUT -m tcp -p tcp --dport 3306 -j mysql # use chain for packets to MySQL port
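To confirm the chain is wired up (a sketch):
iptables -L mysql -n -v --line-numbers    # list the whitelist chain and its hit counters
iptables -L INPUT -n | grep 3306          # confirm the jump from INPUT to the mysql chain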