Conflicting Docker IP in Elastic Beanstalk deployment - amazon-elastic-beanstalk

I have an Elastic Beanstalk app which is running on the Docker platform.
I am trying to connect to a private IP (172.17.57.52), which conflicts with the Docker bridge IP range:
[ec2-user@ip-172-18-1-22 ~]$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.18.1.1 0.0.0.0 UG 0 0 0 eth0
169.254.169.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.1.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0
I've gone through the documentation for Dockerrun.aws.json, and currently there is no provision to configure the network manually.
Is there a way to manually configure the Docker IP range in Elastic Beanstalk?
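One workaround sometimes suggested for this class of conflict (not from this thread, so treat it as a sketch) is to move the docker0 bridge to a non-conflicting range by passing --bip to the Docker daemon from an .ebextensions command. The file path and option format below assume the Amazon Linux Docker platform, where the daemon flags live in /etc/sysconfig/docker:
# .ebextensions/docker-bip.config -- hypothetical example; the path and chosen range are assumptions
commands:
  01_set_docker_bip:
    # append --bip to the daemon options unless it is already set
    command: |
      grep -q -- '--bip' /etc/sysconfig/docker || \
        sed -i 's|^OPTIONS="\(.*\)"|OPTIONS="\1 --bip=10.200.0.1/24"|' /etc/sysconfig/docker
  02_restart_docker:
    command: service docker restart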

Related

QEMU hostfwd works only for some ports

I compiled qemu-system-x86_64 on an aarch64 host and was able to run an x86_64 guest with a command like:
qemu-system-x86_64 -m 4096 -drive file=vmimage.qcow2,if=virtio \
-boot once=c,menu=on -net nic,model=virtio-net-pci \
-net user,hostfwd=tcp::8080-:80,hostfwd=tcp::22222-:22
I could ssh into the guest using
ssh -p22222 user@localhost
Meanwhile, port 80 was not forwarded successfully.
For debugging, I used nc to listen on port 80 inside the guest:
nc -l 80
Then, on the host, I connected to the forwarded port:
nc localhost 8080
However, it was unable to connect to the nc instance in the guest.
I tried the monitor interface. When the host nc command is executed, info usernet shows the following:
(qemu) info usernet
Hub 0 (#net162):
Protocol[State] FD Source Address Port Dest. Address Port RecvQ SendQ
TCP[SYN_SENT] 33 127.0.0.1 8080 10.0.2.15 80 0 0
TCP[ESTABLISHED] 21 127.0.0.1 22222 10.0.2.15 22 0 0
TCP[HOST_FORWARD] 12 * 8080 10.0.2.15 80 0 0
TCP[HOST_FORWARD] 11 * 22222 10.0.2.15 22 0 0
...
I believe the SYN_SENT entry (FD 33) corresponded to the host nc command, and it matched the HOST_FORWARD line (FD 12). However, it never became ESTABLISHED, and a few seconds later nc died with "Connection reset by peer" and the FD 33 line disappeared.
If I nc localhost 22222, I can see the OpenSSH banner.
So it seems only port 22 is forwarded. Any idea about the cause, or how to debug this?
Neither the host nor the guest had any firewall/iptables rules configured, and SELinux is permissive.
Thanks
Edit:
As a temporary workaround, I configured a second nic and used port 22 of the new interface for forwarding my service. I also switched to the newer -nic option, but hostfwd still worked for port 22 only.
qemu-system-x86_64 -m 4096 -drive file=vmimage.qcow2,if=virtio \
-boot once=c,menu=on \
-nic user,model=virtio-net-pci,hostfwd=tcp::60022-:22 \
-nic user,model=virtio-net-pci,net=10.0.3.0/24,hostfwd=tcp::8080-10.0.3.15:22
To forward successfully, I also needed to:
Configure sshd to listen on port 22 of the first nic only (see the sketch after this list).
Configure my service to listen on port 22 of the second nic.
Configure the second nic to use a different network; otherwise both nics were assigned the same IP (10.0.2.15), so I may be better off hardcoding the IP for both nics.
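For the first point, a minimal sketch of what pinning sshd to the first nic can look like (the 10.0.2.15 address is the usual default for QEMU user-mode networking and is an assumption here):
# /etc/ssh/sshd_config -- listen only on the first nic's address
ListenAddress 10.0.2.15
Port 22
After changing it, restart sshd inside the guest (systemctl restart sshd).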
The problem was actually the firewall. My VM (based on Oracle Linux 8.5 from the Oracle Linux VM Templates) had firewall rules in both iptables and nft. After disabling both, the port forwarding worked.
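If you run into the same thing, a quick way to confirm it is the firewall (a sketch, assuming a firewalld/nft based guest such as Oracle Linux 8) before disabling everything:
# inside the guest: list active rules that might drop the forwarded traffic
nft list ruleset
iptables -L -n -v
# open the service port instead of disabling the firewall outright
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload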

Podman network bridge to different network interface

I'm using podman version 3.4.2 on Fedora 35 and trying to expose a Firebird server on the local network.
I was able to pull containers and install the SQL server inside, but I'm having trouble exposing this SQL server from the container on the local network.
I have eth0 with the local network IP 192.168.100.1 (where I want the SQL server from the container to be exposed) and eth1, which is the device with the public IP 1.2.3.4. I want a rootful installation. I used the following command:
podman run -it -p 3050:3050 fb_sql bash
The network is defined as bridge by default. So after I started the SQL server inside the container,
it is only visible on the public IP 1.2.3.4 of the server, and even then not from the server itself, but only from another computer calling the server's public IP.
I tried creating a new network, but the --parent option is only available for -d macvlan.
How can I attach the bridge to eth0 (the local device) rather than the default eth1 (the public IP device)?
netstat -apen | grep 3050 shows:
tcp 0 0 0.0.0.0:3050 0.0.0.0:* LISTEN 0 1304464 203883/conmon
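One detail worth noting (an aside, not from the original post): with the default bridge network, podman's -p flag can also take a host address, so the published port can be restricted to the local-network interface instead of listening on 0.0.0.0:
# publish 3050 only on the local-network address 192.168.100.1
podman run -it -p 192.168.100.1:3050:3050 fb_sql bash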

openshift 3.11 storageos networking issue

I've created an OpenShift 3.11 three-node cluster, two of which are compute nodes, and installed StorageOS on it. One of the compute nodes seems fine with the StorageOS installation, but the second compute node can't reach the first node. The error appears to be routing related.
[root@cortado-o1 standard]# oc get pod -n storageos
NAME READY STATUS RESTARTS AGE
storageos-47qgc 1/1 Running 0 6m
storageos-6bqqp 0/1 Running 3 7m
[root@cortado-o2 ~]# netstat -na | grep 5705
tcp6 0 0 :::5705
[root@cortado-o3 ~]# netstat -na | grep 5705
tcp 0 0 192.168.0.101:43588 192.168.0.101:5705 TIME_WAIT
tcp 0 0 192.168.0.101:43548 192.168.0.101:5705 TIME_WAIT
tcp 0 0 192.168.0.101:43522 192.168.0.101:5705 TIME_WAIT
tcp 0 0 192.168.0.101:43458 192.168.0.101:5705 TIME_WAIT
tcp 0 0 192.168.0.101:43628 192.168.0.101:5705 TIME_WAIT
tcp 0 0 192.168.0.101:43602 192.168.0.101:5705 TIME_WAIT
tcp 0 0 192.168.0.101:43562 192.168.0.101:5705 TIME_WAIT
tcp 0 0 192.168.0.101:43502 192.168.0.101:5705 TIME_WAIT
tcp 0 0 192.168.0.101:43476 192.168.0.101:5705 TIME_WAIT
tcp 0 0 192.168.0.101:43412 192.168.0.101:5705 TIME_WAIT
tcp 0 0 192.168.0.101:43430 192.168.0.101:5705 TIME_WAIT
tcp6 0 0 :::5705 :::* LISTEN
[root@cortado-o3 ~]# !nc
nc 192.168.0.102 5705
Ncat: No route to host.
[root@cortado-o3 ~]# hostname --ip-address
192.168.0.101
time="2018-11-13T04:24:38Z" level=error msg="failed to join existing cluster" action=create category=etcd endpoint="192.168.0.102,192.168.0.101" error="Get http://192.168.0.102:5705/v1/members: dial tcp 192.168.0.102:5705: connect: no route to host" module=cp
time="2018-11-13T04:24:38Z" level=info msg="not first cluster node, joining first node" action=create address=192.168.0.101 category=etcd host=cortado-o3 module=cp target=192.168.0.101
time="2018-11-13T04:24:38Z" level=error msg="failed to join existing cluster" action=create category=etcd endpoint="192.168.0.102,192.168.0.101" error="503 Service Unavailable" module=cp
time="2018-11-13T04:24:38Z" level=info msg="retrying cluster join in 5 seconds..." action=create category=etcd module=cp
Any suggestions? Many thanks.
Your netstat output only shows that StorageOS is bound to the port, not that the nodes can communicate. In fact, the Ncat output shows "No route to host", so they can't connect. StorageOS needs to be able to communicate among its nodes.
The StorageOS docs describe the port prerequisites and how to open them: https://docs.storageos.com/docs/prerequisites/firewalls
Whether you use ufw, firewalld, or plain iptables depends on your OpenShift installation.
For ufw try this:
ufw default allow outgoing
ufw allow 5701:5711/tcp
ufw allow 5711/udp
For firewalld try this:
firewall-cmd --permanent --new-service=storageos
firewall-cmd --permanent --service=storageos --add-port=5700-5800/tcp
firewall-cmd --add-service=storageos --zone=public --permanent
firewall-cmd --reload
For straight iptables:
# Inbound traffic
iptables -I INPUT -i lo -m comment --comment 'Permit loopback traffic' -j ACCEPT
iptables -I INPUT -m state --state ESTABLISHED,RELATED -m comment --comment 'Permit established traffic' -j ACCEPT
iptables -A INPUT -p tcp --dport 5701:5711 -m comment --comment 'StorageOS' -j ACCEPT
iptables -A INPUT -p udp --dport 5711 -m comment --comment 'StorageOS' -j ACCEPT
# Outbound traffic
iptables -I OUTPUT -o lo -m comment --comment 'Permit loopback traffic' -j ACCEPT
iptables -I OUTPUT -d 0.0.0.0/0 -m comment --comment 'Permit outbound traffic' -j ACCEPT
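After applying the rules, you can check from the failing node whether the peer's StorageOS port is reachable; the endpoint below is taken from the error log in the question:
# from cortado-o3: probe the peer's etcd endpoint on the StorageOS port
curl -v http://192.168.0.102:5705/v1/members
# and confirm the kernel has a route to the peer
ip route get 192.168.0.102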
Also check the StorageOS troubleshooting page for this particular issue:
https://docs.storageos.com/docs/platforms/openshift/troubleshoot/install#peer-discovery---networking
In addition, a cluster of fewer than 3 nodes is not supported. You can have 1 node for testing, or 3 or more. Having 2 nodes makes it impossible to ensure quorum in a distributed environment, unless you point the StorageOS kv store to an external etcd.

MySQL connect host changes every time in the Docker container, why?

Inside the Docker container, I try to log in to the host's MySQL server. The first time, the host IP is changed, which confuses me, but the second login succeeds. Can anyone explain this strange behavior?
The IP I log in with is 192.168.100.164, but the error info shows IP 172.18.0.4, which is the container's own address.
More info:
root@b67c39311dbb:~/project# netstat -nr
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
root@b67c39311dbb:~/project# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.4 netmask 255.255.0.0 broadcast 172.18.255.255
ether 02:42:ac:12:00:04 txqueuelen 0 (Ethernet)
RX packets 2099 bytes 2414555 (2.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1752 bytes 132863 (132.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 35 bytes 3216 (3.2 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 35 bytes 3216 (3.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Try adding --add-host="localhost:192.168.100.164" when launching docker run. But this is not good practice, to my mind. You should move your MySQL database to another container and create a network between them.
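A hedged sketch of what that flag looks like in a full command (the image name and other flags are placeholders):
docker run -d --add-host="localhost:192.168.100.164" yourImage:latest
On newer Docker releases (20.10+), --add-host=host.docker.internal:host-gateway is the more common way to reach the host, though that may not apply to the version used here.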
That is expected: when you start a Docker container, it gets its own IP. You need to map a host port to the Docker container; then, when you connect to the host port, it is redirected to the MySQL Docker container. Please look at https://docs.docker.com/config/containers/container-networking/
I'd suggest creating a Docker bridge network and then creating your container with --add-host as suggested by Alexey.
In a simple script:
DOCKER_NET_NAME='the_docker_network_name'
DOCKER_GATEWAY='172.100.0.1'
docker network create \
--driver bridge \
--subnet=172.100.0.0/16 \
--gateway=$DOCKER_GATEWAY \
$DOCKER_NET_NAME
docker run -d \
--add-host db.onthehost:$DOCKER_GATEWAY \
--restart=always \
--network=$DOCKER_NET_NAME \
--name=magicContainerName \
yourImage:latest
EDIT: creating a network will also simplify communication among containers (if you plan to add more in the future), since you'll be able to use container names instead of their IPs.
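With that script, the host's MySQL is reachable from inside the container through the db.onthehost alias, which resolves to the bridge gateway (the host side of the bridge). A hedged usage example, with placeholder credentials:
# inside magicContainerName: connect to the host's MySQL via the gateway alias
mysql -h db.onthehost -P 3306 -u someuser -p
Note that the host's mysqld must accept connections on the 172.100.0.1 interface (bind-address) and the user must be allowed to connect from the 172.100.0.0/16 subnet.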

How to change the default Pod address in openshift origin?

I am using OpenShift Origin to set up a lab environment. But after running openshift-ansible, it seems the default Pod addresses are 10.1.x.x, which conflicts with my company's intranet address range.
So how do I change the default Pod address range?
[root@openshiftorigin-master ~]# netstat -ar
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default gateway 0.0.0.0 UG 0 0 0 ens32
10.1.0.0 0.0.0.0 255.255.0.0 U 0 0 0 tun0
172.16.50.0 0.0.0.0 255.255.255.0 U 0 0 0 ens32
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 tun0
Thanks,
James.
Pods are given addresses out of the ClusterNetworkCIDR in the master config. You will need to restart all pods after changing this value (the documentation describes other implications of changing this value on a running cluster).
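For reference, a hedged sketch of the relevant stanza in /etc/origin/master/master-config.yaml (key names vary slightly between Origin releases, and the values below are only examples); the same range is normally set up front via the openshift-ansible inventory variable osm_cluster_network_cidr:
# /etc/origin/master/master-config.yaml (excerpt)
networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14   # pods get their addresses from this range
  hostSubnetLength: 9                 # size of the per-node subnet carved out of the CIDR
  serviceNetworkCIDR: 172.30.0.0/16   # service range (the 172.30.0.0 route in the table above)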