MySQL connect host changes every time in the Docker container, why? - mysql

Inside a Docker container, I try to log in to the MySQL server running on the host. The first time, the host IP appears to have changed, which confused me, but the second login succeeds. Can anyone explain this strange behavior?
The IP I log in with is 192.168.100.164, but the error info shows 172.18.0.4, which is the container's own address.
More info:
root@b67c39311dbb:~/project# netstat -nr
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
root@b67c39311dbb:~/project# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.4 netmask 255.255.0.0 broadcast 172.18.255.255
ether 02:42:ac:12:00:04 txqueuelen 0 (Ethernet)
RX packets 2099 bytes 2414555 (2.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1752 bytes 132863 (132.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 35 bytes 3216 (3.2 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 35 bytes 3216 (3.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Try adding --add-host="localhost:192.168.100.164" when launching docker run. But to my mind this is not good practice; you should move your MySQL database into another container and create a network between them.

That is expected: when you start a Docker container, it gets its own IP. You need to map a host port to the Docker container; then, when you connect to the host port, it redirects to the MySQL Docker container. Please look at https://docs.docker.com/config/containers/container-networking/
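For completeness, a minimal sketch of the direction the question actually needs (reaching the host's MySQL from inside a container) on Docker Engine 20.10 or newer; the image name and MySQL user below are placeholders, not from the original post:
docker run -d --add-host=host.docker.internal:host-gateway yourImage:latest
# inside the container, connect to the host alias instead of localhost
# (localhost inside the container is the container itself):
mysql -h host.docker.internal -u appuser -p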

I'd suggest you create a bridged Docker network and then create your container using --add-host as suggested by Alexey.
In a simple script:
# Pick a name and a gateway address for the dedicated bridge network.
# Note: 172.100.0.0/16 lies outside the private 172.16.0.0/12 block;
# a private subnet such as 172.30.0.0/16 may be a safer choice.
DOCKER_NET_NAME='the_docker_network_name'
DOCKER_GATEWAY='172.100.0.1'
# Create the user-defined bridge network with a fixed subnet and gateway.
docker network create \
  --driver bridge \
  --subnet=172.100.0.0/16 \
  --gateway=$DOCKER_GATEWAY \
  $DOCKER_NET_NAME
# Run the container on that network; db.onthehost resolves to the gateway,
# i.e. the host side of the bridge.
docker run -d \
  --add-host db.onthehost:$DOCKER_GATEWAY \
  --restart=always \
  --network=$DOCKER_NET_NAME \
  --name=magicContainerName \
  yourImage:latest
EDIT: creating a network will also simplify communication among containers (if you plan to add more in the future), since you'll be able to use container names instead of their IPs.
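One more note, not from the original answer: for the container to reach a MySQL server on the host via db.onthehost, the host's MySQL must also listen on the bridge interface and allow connections from the container subnet. A sketch with hypothetical user and database names:
# on the host, set bind-address = 0.0.0.0 (or the bridge gateway IP) in my.cnf,
# then allow a user to connect from the 172.100.0.0/16 container subnet:
CREATE USER 'appuser'@'172.100.%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'172.100.%';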

Related

Conflicting Docker IP in Elastic Beanstalk deployment

I have an Elastic Beanstalk app which is running on the Docker platform.
I am trying to connect to a private IP (172.17.57.52) which is in conflict with the Docker bridge IP range.
[ec2-user@ip-172-18-1-22 ~]$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.18.1.1 0.0.0.0 UG 0 0 0 eth0
169.254.169.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.1.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0
I've gone through the documentation for Dockerrun.aws.json, and currently there is no provision to configure the network manually.
Is there a way to manually configure the Docker IP range in Elastic Beanstalk?
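One possible approach, offered here as an assumption rather than a confirmed Elastic Beanstalk feature: move the docker0 bridge itself out of the conflicting 172.17.0.0/16 range through the Docker daemon configuration, for example from an .ebextensions hook that runs on the instance:
# sketch: set the bridge IP ("bip") in /etc/docker/daemon.json and restart dockerd
echo '{ "bip": "10.200.0.1/24" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker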

QEMU hostfwd works only for some ports

I compiled qemu-system-x86_64 on an aarch64 host and was able to run an x86_64 guest with a command like
qemu-system-x86_64 -m 4096 -drive file=vmimage.qcow2,if=virtio \
-boot once=c,menu=on -net nic,model=virtio-net-pci \
-net user,hostfwd=tcp::8080-:80,hostfwd=tcp::22222-:22
I could ssh into the guest using
ssh -p22222 user@localhost
Meanwhile, port 80 was not forwarded successfully.
For debugging, I used nc to listen on port 80 inside the guest
nc -l 80
Then in the host, I connected to the forwarded port
nc localhost 8080
However, it was unable to connect to the guest's nc.
I tried the monitor interface. When the host nc command is executed, info usernet shows the following:
(qemu) info usernet
Hub 0 (#net162):
Protocol[State] FD Source Address Port Dest. Address Port RecvQ SendQ
TCP[SYN_SENT] 33 127.0.0.1 8080 10.0.2.15 80 0 0
TCP[ESTABLISHED] 21 127.0.0.1 22222 10.0.2.15 22 0 0
TCP[HOST_FORWARD] 12 * 8080 10.0.2.15 80 0 0
TCP[HOST_FORWARD] 11 * 22222 10.0.2.15 22 0 0
...
I believe the SYN_SENT entry (FD 33) corresponded to the host nc command, and it matched the HOST_FORWARD line (FD 12). However, it never became ESTABLISHED. A few seconds later, nc died with Connection reset by peer, and the FD 33 line disappeared.
If I nc localhost 22222, I can see the OpenSSH banner.
So it seems only port 22 is forwarded. Any idea about the cause, or how to debug this?
Both host and guest had no firewall/iptables configured, and SELinux is permissive.
Thanks
Edit:
As a temporary workaround, I configured a second NIC and used port 22 of the new interface for forwarding my service. I also switched to the newer -nic option, but hostfwd still worked for port 22 only.
qemu-system-x86_64 -m 4096 -drive file=vmimage.qcow2,if=virtio \
-boot once=c,menu=on \
-nic user,model=virtio-net-pci,hostfwd=tcp::60022-:22 \
-nic user,model=virtio-net-pci,net=10.0.3.0/24,hostfwd=tcp::8080-10.0.3.15:22
To forward successfully, I also needed to:
Configure sshd to listen on port 22 of the first NIC only.
Configure my service to listen on port 22 of the second NIC.
Configure the second NIC to use a different network; otherwise both NICs were assigned the same IP (10.0.2.15). It may be better to hardcode the IP for both NICs.
The problem was actually the firewall. My VM (based on Oracle Linux 8.5 from the Oracle Linux VM Templates) in fact had firewall rules in both iptables and nft. After disabling both iptables and nft, the port forward worked.
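A minimal sketch of that kind of check and fix (the unit names are my assumption and vary by distribution; Oracle Linux 8 shown here):
# inside the guest: inspect both firewall layers
sudo iptables -L -n -v
sudo nft list ruleset
# disable them wholesale (or, better, open only the forwarded ports)
sudo systemctl disable --now firewalld nftables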

OKD 4.5 single node installation

I'm trying to build an OKD 4.5 single-node cluster following Craig Robinson's blog post (at https://medium.com/swlh/guide-okd-4-5-single-node-cluster-832693cb752b). I first faced this issue on the bootstrap node, but after deleting it and recreating the whole process again, it booted up successfully. The same issue then happened again while preparing the control plane (master) node. After the initial CoreOS download (which proves the webserver is working fine), I get this recurring GET error message over and over again:
ignition[xxx]: GET error: Get "https://api-int.lab.okd.local:22623/config/master": EOF
And this is my control plane node config:
ip=10.106.31.233::10.106.31.1:255.255.255.0:::none nameserver=10.106.31.231 coreos.inst.install_dev=/dev/sda coreos.inst.image_url=http://10.106.31.231:8080/okd4/fcos.raw.xz coreos.inst.ignition_url=http://10.106.31.231:8080/okd4/master.ign
IPs are:
okd-services: 10.106.31.231 ;
bootstrap: 10.106.31.232 ;
control-plane: 10.106.31.233
I can reach http://10.106.31.231:8080/okd4 from a remote PC and list the contents, including the master.ign file. Pinging "api-int.lab.okd.local" succeeds too. The listening ports on the okd-services node are:
[root@okd4-services ~]# ss -ltu
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:hostmon 0.0.0.0:*
udp UNCONN 0 0 10.106.31.231:domain 0.0.0.0:*
udp UNCONN 0 0 127.0.0.1:domain 0.0.0.0:*
udp UNCONN 0 0 127.0.0.53%lo:domain 0.0.0.0:*
udp UNCONN 0 0 [::]:hostmon [::]:*
udp UNCONN 0 0 [::]:domain [::]:*
tcp LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:*
tcp LISTEN 0 4096 127.0.0.1:rndc 0.0.0.0:*
tcp LISTEN 0 4096 0.0.0.0:https 0.0.0.0:*
tcp LISTEN 0 4096 0.0.0.0:22623 0.0.0.0:*
tcp LISTEN 0 4096 0.0.0.0:cslistener 0.0.0.0:*
tcp LISTEN 0 4096 0.0.0.0:sun-sr-https 0.0.0.0:*
tcp LISTEN 0 4096 0.0.0.0:hostmon 0.0.0.0:*
tcp LISTEN 0 4096 0.0.0.0:http 0.0.0.0:*
tcp LISTEN 0 10 10.106.31.231:domain 0.0.0.0:*
tcp LISTEN 0 10 127.0.0.1:domain 0.0.0.0:*
tcp LISTEN 0 4096 127.0.0.53%lo:domain 0.0.0.0:*
tcp LISTEN 0 128 [::]:ssh [::]:*
tcp LISTEN 0 4096 [::1]:rndc [::]:*
tcp LISTEN 0 4096 [::]:hostmon [::]:*
tcp LISTEN 0 511 *:webcache *:*
tcp LISTEN 0 10 [::]:domain [::]:*
the output of the dig test on okd-services node is:
[root@okd4-services ~]# dig -x 10.106.31.231
; <<>> DiG 9.11.25-RedHat-9.11.25-2.fc33 <<>> -x 10.106.31.231
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60620
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;231.31.106.10.in-addr.arpa. IN PTR
;; ANSWER SECTION:
231.31.106.10.in-addr.arpa. 604800 IN PTR api-int.lab.okd.local.
231.31.106.10.in-addr.arpa. 604800 IN PTR api.lab.okd.local.
231.31.106.10.in-addr.arpa. 604800 IN PTR okd4-services.okd.local.
;; SERVER: 127.0.0.53#53(127.0.0.53)
I deleted and recreated the control plane node to see if that would solve the issue, but it did not. Any idea what this error means?
I had exactly the same issue with this article. The problem was the bootstrap node, which could not finish its initialization process. First of all, initialize the bootstrap node and make sure the process has finished. The simplest way to check what's going on with a node:
Connect to the node with ssh core@<NODE'S-IP> from the machine where you generated the ssh certificate.
ssh will show you useful information:
This is the bootstrap node; it will be destroyed when the master is fully up.
The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g.
journalctl -b -f -u release-image.service -u bootkube.service
Fedora CoreOS 32.20200629.3.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/c/server/coreos/
First of all, check journalctl -b -f -u release-image.service -u bootkube.service, because these are the main units. However, there might be other issues, so check for failed or unfinished systemd service units with systemctl list-units --type=service, and use journalctl -f -u <UNIT-NAME> to follow a unit's progress (see the sketch below).
The whole process takes some time (~20-40 minutes in my case), and there might be some timeouts in the journal's logs, so just wait for the final status of the unit.
Only then should you start initializing the control-plane node.
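Putting those checks together, a small sketch of the debugging loop on the bootstrap node (kubelet.service below is only an example of a unit you might follow, not from the original answer):
# watch the two main bootstrap units
journalctl -b -f -u release-image.service -u bootkube.service
# list failed service units, then follow any suspicious one
systemctl list-units --type=service --state=failed
journalctl -f -u kubelet.service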
Finally, my own issue was a wrong Fedora CoreOS version, because only version 32 can be used. For me it installed okay with:
fedora-coreos-32.20201104.3.0-metal.x86_64.raw.xz
fedora-coreos-32.20201104.3.0-metal.x86_64.raw.xz.sig
fedora-coreos-32.20201104.3.0-live.x86_64.iso
openshift-client-linux-4.5.0-0.okd-2020-10-15-235428.tar.gz
openshift-install-linux-4.5.0-0.okd-2020-10-15-235428.tar.gz

Bootnode public address

I am trying to deploy a small private Ethereum network using geth. I have a server running geth configured as a miner on my local network. On the other side, I have a droplet in DigitalOcean that I want to use as a bootnode to connect future nodes to my network.
I have executed the following commands on my DigitalOcean droplet:
bootnode --genkey=boot.key
bootnode --nodekey=boot.key --addr=${MY_PUBLICIP}:30301
And I get the following output from the command, instead of the public key that I need to use as my enode reference in the future nodes:
INFO [10-29|18:13:32.851] New local node record seq=1 id=785b198c28c625f8 ip=<nil> udp=0 tcp=0
Could somebody please tell me how to interpret the output of the bootnode command?
I ran a netstat command in order to find out whether or not the program is opening a port.
ether@ubuntu-s-1vcpu-2gb-ams3-01:~$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:domain 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
udp 6912 0 localhost:domain 0.0.0.0:*
udp 0 0 ubuntu-s-1vcpu-2g:30301 0.0.0.0:*
raw6 0 0 [::]:ipv6-icmp [::]:* 7
raw6 0 0 [::]:ipv6-icmp [::]:* 7
I'm using a standard Ubuntu 18.04 configuration on a basic DigitalOcean droplet. I would like to know if I should configure something else, besides the usual compilation of the geth code, to make a bootnode work.
Thanks, any help is welcome.
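For reference, one detail not in the original post: go-ethereum's bootnode can print the node's public key straight from the key file with its -writeaddress flag, and the enode URL can then be assembled by hand (the droplet IP is a placeholder):
bootnode -nodekey=boot.key -writeaddress
# prints the 128-hex-character node ID; the bootnode URL for other nodes is then
# enode://<printed-node-id>@<droplet-public-ip>:30301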

How to change the default Pod address in OpenShift Origin?

I am using OpenShift Origin to set up a lab environment. But after running openshift-ansible, it seems the default Pod addresses are 10.1.x.x, which conflicts with my company's intranet address range.
So how do I change the default Pod address?
[root@openshiftorigin-master ~]# netstat -ar
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default gateway 0.0.0.0 UG 0 0 0 ens32
10.1.0.0 0.0.0.0 255.255.0.0 U 0 0 0 tun0
172.16.50.0 0.0.0.0 255.255.255.0 U 0 0 0 ens32
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 tun0
Thanks,
James.
Pods are given addresses out of the ClusterNetworkCIDR in the master config. You will need to restart all pods after changing this value (the documentation describes other implications of changing this value on a running cluster).
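A sketch of where that value lives (exact key names vary by Origin version; the CIDRs below are examples chosen to avoid the 10.1.x.x conflict, not values from the original answer):
# /etc/origin/master/master-config.yaml
networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14   # pod addresses are allocated from this range
  hostSubnetLength: 9
  serviceNetworkCIDR: 172.30.0.0/16
With openshift-ansible, the corresponding inventory variable is osm_cluster_network_cidr.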