I have an Elastic Beanstalk app running on the Docker platform.
I am trying to connect to a private IP (172.17.57.52), which conflicts with the Docker bridge IP range.
[ec2-user@ip-172-18-1-22 ~]$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.18.1.1 0.0.0.0 UG 0 0 0 eth0
169.254.169.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.1.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0
I've gone through the documentation for Dockerrun.aws.json, and currently there is no provision to configure the network manually.
Is there a way to manually configure the Docker IP range in Elastic Beanstalk?
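One workaround that is sometimes suggested (a sketch of mine, not from the Dockerrun.aws.json docs; the file name, the 10.11.0.0/16 range, and the restart command are all assumptions) is to override the docker0 bridge address through the Docker daemon's own configuration using an .ebextensions file:
# .ebextensions/docker-bip.config (hypothetical file name)
files:
  "/etc/docker/daemon.json":
    mode: "000644"
    owner: root
    group: root
    content: |
      { "bip": "10.11.0.1/16" }
commands:
  01_restart_docker:
    command: service docker restart
The bip key is Docker's standard daemon setting for the docker0 bridge; whether Elastic Beanstalk preserves daemon.json across deployments and platform updates should be verified on your platform version.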
I am trying to deploy a small private Ethereum network using geth. I have a server on my local network running geth configured as a miner. On the other side, I have a droplet in DigitalOcean that I want to use as a bootnode to connect future nodes to my network.
I have executed the following commands in my DigitalOcean Droplet:
bootnode --genkey=boot.key
bootnode --nodekey=boot.key --addr=${MY_PUBLICIP}:30301
And I get the following output from the command, instead of the public key that I need to use as my enode reference in the future nodes:
INFO [10-29|18:13:32.851] New local node record seq=1 id=785b198c28c625f8 ip=<nil> udp=0 tcp=0
Could somebody please tell me how to interpret the output of the bootnode command?
I ran a netstat command to find out whether or not the program is opening a port.
ether@ubuntu-s-1vcpu-2gb-ams3-01:~$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:domain 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
udp 6912 0 localhost:domain 0.0.0.0:*
udp 0 0 ubuntu-s-1vcpu-2g:30301 0.0.0.0:*
raw6 0 0 [::]:ipv6-icmp [::]:* 7
raw6 0 0 [::]:ipv6-icmp [::]:* 7
I'm using a standard Ubuntu 18.04 configuration on a basic DigitalOcean droplet. I would like to know if I should configure something else, besides the usual compilation of the geth code, to make a bootnode work.
Thanks, any help is welcome.
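For reference, a minimal sketch of deriving the enode reference from the same boot.key generated above (bootnode's --writeaddress flag prints the node's public key and exits):
bootnode --nodekey=boot.key --writeaddress
# prints the 128-hex-character public key; the enode URL for other nodes is then
# enode://<printed-key>@<droplet-public-IP>:30301
The udp line for port 30301 in the netstat output above suggests the bootnode is in fact listening.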
From inside the Docker container, I try to log in to the MySQL server on the host. On the first attempt the host IP appears changed, which confused me, but the second login succeeds. Can anyone explain this strange behavior?
The IP I log in with is 192.168.100.164, but the error info shows the IP 172.18.0.4, which is the container's own address.
More info:
root@b67c39311dbb:~/project# netstat -nr
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
root@b67c39311dbb:~/project# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.4 netmask 255.255.0.0 broadcast 172.18.255.255
ether 02:42:ac:12:00:04 txqueuelen 0 (Ethernet)
RX packets 2099 bytes 2414555 (2.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1752 bytes 132863 (132.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 35 bytes 3216 (3.2 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 35 bytes 3216 (3.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Try adding --add-host="localhost:192.168.100.164" when launching docker run. But this is not good practice, to my mind. You should move your MySQL database to another container and create a network between them.
That is true: when we start a Docker container, it gets its own IP. You need to map a host port to the Docker container; then, when you try to connect to the host port, it redirects to the MySQL Docker container, as in the sketch below. Please look at https://docs.docker.com/config/containers/container-networking/
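A minimal sketch of such a mapping (the mysql image tag and the password are placeholders of mine):
# publish the container's 3306 on the host's 3306
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret --name some-mysql mysql:8
# clients on the LAN then connect to the host address, e.g. 192.168.100.164:3306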
I'd suggest you create a Docker bridged network and then create your container using the --add-host option, as suggested by Alexey.
In a simple script:
DOCKER_NET_NAME='the_docker_network_name'
DOCKER_GATEWAY='172.100.0.1'
docker network create \
--driver bridge \
--subnet=172.100.0.0/16 \
--gateway=$DOCKER_GATEWAY \
$DOCKER_NET_NAME
docker run -d \
--add-host db.onthehost:$DOCKER_GATEWAY \
--restart=always \
--network=$DOCKER_NET_NAME \
--name=magicContainerName \
yourImage:latest
EDIT: creating a network will also simplify communication among containers (if you plan to have more in the future), since you'll be able to use the container names instead of their IPs.
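For example (a sketch reusing the network created above; the container names, the password, and the images are placeholders), the application container can then reach the database by the name db:
docker run -d --network=$DOCKER_NET_NAME --name=db -e MYSQL_ROOT_PASSWORD=secret mysql:8
docker run -d --network=$DOCKER_NET_NAME --name=app yourImage:latest
# inside "app": mysql -h db -u root -p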
In summary, although I've set a firewall rule that allows tcp:80, my GCE instance, which is on the "default" network, is not accepting connections on port 80. It appears only port 22 is open on my instance. I can ping it, but I can't traceroute to it in under 64 hops.
What follows is my investigation that led me to those conclusions.
gcloud beta compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
default-allow-http default INGRESS 1000 tcp:80
default-allow-https default INGRESS 1000 tcp:443
default-allow-icmp default INGRESS 65534 icmp
default-allow-internal default INGRESS 65534 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default INGRESS 65534 tcp:3389
default-allow-ssh default INGRESS 65534 tcp:22
temp default INGRESS 1000 tcp:8888
gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
ssrf3 us-west1-c f1-micro true 10.138.0.4 35.197.33.182 RUNNING
gcloud compute instances describe ssrf3
...
name: ssrf3
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: external-nat
    natIP: 35.197.33.182
    type: ONE_TO_ONE_NAT
  kind: compute#networkInterface
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/hack-170416/global/networks/default
  networkIP: 10.138.0.4
  subnetwork: https://www.googleapis.com/compute/v1/projects/hack-170416/regions/us-west1/subnetworks/default
...
tags:
  fingerprint: 6smc4R4d39I=
  items:
  - http-server
  - https-server
I ssh into 35.197.33.182 (which is the ssrf3 instance) and run:
sudo nc -l -vv -p 80
On my local machine, I run:
nc 35.197.33.182 80 -vv
hey
but nothing happens.
So I try to ping the host. That looks healthy:
ping 35.197.33.182
PING 35.197.33.182 (35.197.33.182): 56 data bytes
64 bytes from 35.197.33.182: icmp_seq=0 ttl=57 time=69.172 ms
64 bytes from 35.197.33.182: icmp_seq=1 ttl=57 time=21.509 ms
Traceroute quits after 64 hops, without reaching the 35.197.33.182 destination.
So I check which ports are open with nmap:
nmap 35.197.33.182
Starting Nmap 7.12 ( https://nmap.org ) at 2017-06-18 16:39 PDT
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.06 seconds
nmap 35.197.33.182 -Pn
Starting Nmap 7.12 ( https://nmap.org ) at 2017-06-18 16:39 PDT
Nmap scan report for 182.33.197.35.bc.googleusercontent.com (35.197.33.182)
Host is up (0.022s latency).
Not shown: 999 filtered ports
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 1 IP address (1 host up) scanned in 6.84 seconds
… even when I’m running nc -l -p 80 on 35.197.33.182.
Ensure that the VM-level firewall is not intervening. For example, Container-Optimized OS is a bit special in comparison to all other default images:
By default, the Container-Optimized OS host firewall allows only outgoing connections, and accepts incoming connections only through the SSH service. To accept incoming connections on a Container-Optimized OS instance, you must open the ports your services are listening on.
https://cloud.google.com/container-optimized-os/docs/how-to/firewall
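If that's what is happening here, the fix per that page is to open the port at the instance level, along these lines (a sketch based on that documentation; port 80 matches the question):
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT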
Checking the two checkboxes "Allow HTTP traffic" and "Allow HTTPS traffic" did the trick. This created two firewall rules that opened ports 80 and 443.
Manually adding rules for those ports didn't work for some reason, but checking the boxes did.
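To see what those checkboxes actually created, the resulting rule can be inspected (the rule name is taken from the firewall-rules listing earlier in the question):
gcloud compute firewall-rules describe default-allow-http
# expect: allowed tcp:80, with targetTags including http-server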
On a quick glance, your setup seems to be correct.
You have allowed INGRESS tcp:80 for all instances in the default network.
Your VM is on the default network.
Traceroute will not give a good indication when you have VMs running on cloud providers, because of the use of SDNs, virtual networks, and a whole bunch of intermediate networking infrastructure, unfortunately.
One thing I notice is that your instance has two tags, http-server and https-server. These could possibly be used by some other firewall rule which is somehow blocking traffic to your VM's tcp:80 port.
There are other variables in your setup and I'm happy to debug if needed further.
Tag-based firewall rules
You can try tag-based firewall rules, which apply the firewall rule only to instances that have the specified target tag.
Network tags are used by networks to identify which instances are
subject to certain firewall rules and network routes. For example, if
you have several VM instances that are serving a large website, tag
these instances with a shared word or term and then use that tag to
apply a firewall rule that allows HTTP access to those instances. Tags
are also reflected in the metadata server, so you can use them for
applications running on your instances. When you create a firewall
rule, you can provide either sourceRanges or sourceTags but not both.
# Add a new tag based firewall rule to allow ingress tcp:80
gcloud compute firewall-rules create rule-allow-tcp-80 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-80 --allow tcp:80
# Add the allow-tcp-80 target tag to the VM ssrf3
gcloud compute instances add-tags ssrf3 --tags allow-tcp-80
It might take a few seconds to a couple of minutes for the changes to take effect.
NOTE: Since you're opening up ports of VM's external IPs to the internet, take care to restrict access accordingly as per the needs of your application running on these ports.
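For instance, the rule created above could later be narrowed to a trusted source range (203.0.113.0/24 is a placeholder of mine, not from the question):
gcloud compute firewall-rules update rule-allow-tcp-80 --source-ranges 203.0.113.0/24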
After lots of trial and error, the following worked for me on ubuntu-1404-trusty-v20190514, with a Node.js app listening on port 8080. Accept ports 80 and 8080, and then redirect 80 to 8080.
# accept incoming traffic on ports 80 and 8080
sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
# redirect 80 to 8080, for locally generated traffic and for external traffic
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
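A usage note of mine, not part of the original answer: rules added with iptables are lost on reboot. On Ubuntu they can be persisted with the iptables-persistent package, or written by hand to the file it loads at boot:
sudo apt-get install iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'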
In case you are on a Windows Server instance, you could try turning off Windows Defender and checking whether it's blocking the incoming connection.
On Fedora 21 I'm not getting a new default route set when I establish an OpenVPN connection. On Fedora 20 it was fine (with the exact same .ovpn configuration file).
Any ideas?
Add this to your .ovpn configuration file.
route-delay 5
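With that directive added, the relevant part of the client config would look like this (a sketch assuming the common redirect-gateway setup):
# replace the default route with one through the VPN
redirect-gateway def1
# wait 5 seconds before adding routes, giving the tunnel time to come up
route-delay 5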
I experienced this myself. In my .ovpn client config file, I have redirect-gateway def1, yet route -n still showed my default route was not changed.
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 1024 0 0 em1
10.1.0.0 172.22.22.57 255.255.0.0 UG 20 0 0 tun0
A traceroute verified this. So I found a workaround to manually create the route for now. Below, the first line shows how to construct the needed command, and the second one is an example command based on my example route output above.
sudo route add -net {Destination1} netmask {Genmask1} gw {Gateway2} dev {Iface2}
sudo route add -net 0.0.0.0 netmask 0.0.0.0 gw 172.22.22.57 dev tun0
When you're done with VPN, run the same thing again, except with add replaced by del.
sudo route del -net 0.0.0.0 netmask 0.0.0.0 gw 172.22.22.57 dev tun0
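Equivalently, on recent Fedora the same workaround can be expressed with iproute2, which replaced the old net-tools route command (same addresses as above; the metric 50 is an arbitrary value below em1's metric of 1024, so the tun0 route takes precedence):
sudo ip route add default via 172.22.22.57 dev tun0 metric 50
sudo ip route del default via 172.22.22.57 dev tun0 metric 50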
I don't know why OpenVPN's behavior changed. I did try going to Fedora's Settings > Network > Wired > gear > IPv4 > Routes and turning off Automatic, but that did not help.