Under Ubuntu 13.04 with Docker 0.7.2, when I create a container via a Dockerfile or interactively, the generated veth* network interface does not have an IPv4 address, only an IPv6 one.
How can I get an IPv4 address? Is there something I missed? Does this depend on my network configuration?
Same behaviour on a 12.04 box.
The veth… network interface on the host shouldn't have an IPv4 address. Those virtual interfaces work in pairs:
One interface will be in the container, where it will be named eth0 and will have an IPv4 address. For all intents and purposes, it looks like a normal interface.
The other half of the pair is the veth… interface. It will be in the host, and won't have an IPv4 address.
Those two interfaces are connected together: any packet sent on an interface will appear as being received by the other. You can imagine that they are connected by a cross-over cable, if that helps :-)
The fact that the veth… interface has an IPv6 address is just because, when IPv6 is enabled, all interfaces receive at least a link-local address. But that address is essentially useless in this case.
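If you want to see this for yourself, compare the two halves of the pair (vethXYZ below is a placeholder for the actual interface name on your host):
ip addr show vethXYZ        (on the host: only a link-local fe80:: IPv6 address)
ip addr show eth0           (inside the container: a normal inet/IPv4 address)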
Restart the Docker service once. After the restart, the docker0 bridge will show an IPv4 address:
sudo systemctl restart docker.service
Please keep in mind that running containers will be stopped.
You can check the IP by using the ifconfig command.
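For example (the exact address will depend on your setup):
ifconfig docker0
should now show an inet (IPv4) line for the docker0 bridge.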
I've created an Oracle Cloud Infrastructure compute instance running Ubuntu 20.04. I am trying to open port 19132.
As per another question I found:
Opening port 80 on Oracle Cloud Infrastructure Compute node
I've created a public subnet which has an internet gateway and added ingress rules for port 19132 (in the security lists)
netstat looks good
netstat -tulpn
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:19132 0.0.0.0:* 1007/./bedrock_serv
I installed ufw and added rules to allow 19132 but I still can't connect to it from the outside world. Can anyone point out where I am going wrong?
I had the same issue on Oracle Cloud.
Here is what worked for me.
First, install firewalld:
sudo apt install firewalld
Then open the port in the public zone:
sudo firewall-cmd --zone=public --permanent --add-port=19132/tcp
Finally, reload firewalld:
sudo firewall-cmd --reload
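Note that your netstat output shows the server listening on UDP, so if the TCP rule alone doesn't help, you may also need to open the UDP port and reload again:
sudo firewall-cmd --zone=public --permanent --add-port=19132/udp
sudo firewall-cmd --reload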
Looks like you need to have a Public IP configured on that VM for it to be reachable from the internet.
Please look at
https://docs.cloud.oracle.com/en-us/iaas/Content/Network/Tasks/managingpublicIPs.htm
For an instance to communicate directly with the internet, all of the following are required:
The instance must be in a public subnet.
The instance must have a public IP address.
The instance's VCN must have an internet gateway.
The public subnet must have route tables and security lists configured accordingly.
You haven't mentioned anything about the route table. If it's missing a route, add one with destination 0.0.0.0/0 and the internet gateway as the target.
Two questions come to mind:
You have specified two rules, one for TCP and one for UDP. Your netstat shows that something is listening for UDP traffic. Is there also something listening on TCP, or are you using UDP only for the test?
Can you tell us anything about the traffic characteristics on this port? I'm asking because if it is UDP traffic, the only way for connection tracking to work is to track the source/dest IP and port. Since the port will not be present in fragments, that traffic will be dropped. This could be happening on the ingress or egress side. To verify, you could create test ingress/egress rules for all UDP traffic to/from your test IP.
Since your ingress rules are stateful, the egress rules shouldn't matter, but it wouldn't hurt to double-check them. If none of these things work, you might try a tool like echoping to get more insight into whether the traffic is having trouble on the ingress or egress side.
Please check the order of your iptables rules. Could you post the following command's output for the INPUT chain?
sudo iptables -S INPUT
I have seen iptables rules be the single most prominent reason for these issues.
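For example, if the output shows a REJECT or DROP rule before your rule for 19132, you could insert an allow rule ahead of it (shown here for UDP, since that is what your netstat reports; adjust the protocol if needed):
sudo iptables -I INPUT 1 -p udp --dport 19132 -j ACCEPT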
I think you have to allow the user, or add a user who can connect, like this:
create user 'user'@'publicIP' identified by 'password';
grant all privileges on *.* to 'user'@'publicIP' with grant option;
flush privileges;
Here, publicIP can be '0.0.0.0' or your system's IP address.
Don't use '0.0.0.0', as it is open to all; I have faced various breaches on my GCP machine because of this, which led to my account being blocked.
I am working on a project that requires me to have multiple network interfaces. I followed the documentation and created three interfaces. I also changed the firewall rules. But even after changing the firewall rules, I am not getting a reply for an ICMP request to the second interface's external IP.
As seen in the screenshot, I have allowed all protocols from anywhere to any instance in my network.
If you look at the routing table of your VM instance, you'll see that the default route is configured on the primary network interface eth0:
vm-instance:$ ip route
default via 10.156.0.1 dev eth0
...
Whether an Ephemeral or a Static External IP address is configured, this External IP is unknown to the operating system of the VM instance. The External IP address is mapped to the VM's Internal address transparently by VPC. You can verify this with the command
vm-instance:$ ip -4 address show
You'll see that there are no External IPs bound.
Furthermore, IP packet forwarding is disabled both at the instance level and in the Google-provided Linux guest OS. The commands below can verify that:
CloudShell:$ gcloud compute instances describe vm-instance --zone=your-zone | grep canIpForward
vm-instance:$ sudo sysctl net.ipv4.ip_forward
Therefore, when a ping packet is received on a secondary interface, the instance can't reply.
To explore this behavior a bit, you may launch tcpdump on the VM instance so that it listens on a secondary interface, for example eth1:
vm-instance:$ sudo apt-get install tcpdump
vm-instance:$ sudo tcpdump -i eth1
then find out the External IP of your Cloud Shell appliance and ping the secondary External IP of your VM instance from Cloud Shell:
CloudShell:$ curl ifconfig.me/ip
CloudShell:$ ping [secondary_ip_of_vm_instance]
You'll see in the tcpdump output on the console of your VM instance how ICMP packets arrive at the eth1 interface from the External IP address of your workstation. But they are not replied to.
Google provides an explanation of this behavior in the Troubleshooting section of the VPC documentation and suggests possible workarounds:
Virtual Private Cloud > Doc > Creating instances with multiple network interfaces > Troubleshooting > I am not able to connect to secondary interface using external IP:
The DHCP server programs a default route only on the primary network interface of the VM. If you want to connect to the secondary interface using an external IP, there are two options. If you only need to connect outside the network on the secondary network interface, you can set a default route on that network interface. Otherwise, you can use Configuring Policy Routing to configure a separate routing table using source-based policy routing in your VM.
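As a rough illustration of the policy-routing option (the table name and addresses below are placeholders; use your subnet's gateway and the internal IP assigned to eth1):
vm-instance:$ echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables
vm-instance:$ sudo ip route add default via 10.132.0.1 dev eth1 table rt1
vm-instance:$ sudo ip rule add from 10.132.0.2/32 table rt1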
I'm trying to connect to a Postgres database from a Spring Boot application deployed in Minishift.
The Postgres server is running on the same host that Minishift is running on.
I've tried setting the Postgres server to listen on a specific IP address and using that same address in the Spring Boot JDBC connection URL, but I still get org.postgresql.util.PSQLException: Connection to 172.99.0.1:5432 refused.
I've also tried using 10.0.2.2
Also tried, in /etc/postgresql/9.5/main/postgresql.conf, setting:
listen_addresses = '*'
How can I connect to a database external to minishift, running on same host?
Besides the answer referenced in my comment, which suggests making your database listen on the IP address of the Docker bridge, you could make your pod use the network stack of your host. This way you could reach Postgres on the loopback. This works only if you can guarantee that the pod will always run on the same host as the database.
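For the first approach, a minimal sketch (assuming the bridge address is 172.17.0.1 and pods come from 172.17.0.0/16; substitute your actual bridge address and subnet): in postgresql.conf set
listen_addresses = 'localhost,172.17.0.1'
and in pg_hba.conf add
host    all    all    172.17.0.0/16    md5
then restart PostgreSQL.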
The Kubernetes documentation discourages using hostNetwork, but if you understand the consequences you can enable it, as in the sketch below.
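A minimal sketch of such a pod spec (the names and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: springboot-app
spec:
  hostNetwork: true
  containers:
  - name: app
    image: my-springboot-image
With hostNetwork: true the pod shares the host's network namespace, so the application can reach PostgreSQL on localhost:5432.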
If a pod inside Kubernetes can't see the IP address of the host, then I guess it's an underlying firewall or networking issue. Try opening a shell inside the pod...
kubectl exec -it mypodname bash
Then try ping, telnet, curl, wget or whatever to see if you can reach the IP address.
It sounds like something's wrong with the networking setup of your minishift. It might be worth raising an issue with minishift: https://github.com/minishift/minishift/issues/new
If you can find an IP address on the host which is accessible from a pod, you can create a Kubernetes Service and then an Endpoints object for that service with the IP address of the database on your host; then you can use the usual DNS discovery of Kubernetes services (i.e. using the service name as the DNS name), which will resolve to that IP address. Over time you could have multiple IP addresses for failover etc.
See: https://kubernetes.io/docs/user-guide/services/#without-selectors
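A sketch of that pattern (the service name and IP are placeholders; the Endpoints address should be the host IP that is reachable from pods):
kind: Service
apiVersion: v1
metadata:
  name: external-postgres
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-postgres
subsets:
- addresses:
  - ip: 192.168.99.1
  ports:
  - port: 5432
The application could then use jdbc:postgresql://external-postgres:5432/yourdb as its connection URL.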
Then you can use Services to talk to all your actual network endpoints, with your application code completely decoupled from whether the endpoints are implemented inside or outside Kubernetes, and with load balancing baked in!
I'm trying to test my Windows Phone 8 app on an actual device, but I need the IP address of my computer in order to do this. When I type 'ipconfig' in the command prompt, it shows two different IPv4 addresses and I can't tell which is the correct one for my computer that will allow me to test the app on a device.
Command Prompt Output:
Ethernet Adapter vEthernet <Internal Ethernet Port Windows Phone Emulator Internal Switch>:
IPv4 Address......: 169.254.xx.xx
Ethernet Adapter vEthernet <New Virtual Switch>:
IPv4 Address......: 192.168.x.xxx
I'm having a heck of a time getting this app to actually work on a device, so if there is anything you see that is off, by all means let me know. My concern in this question, though, is: which of these is the true IP address of my computer?
The one starting with 169.254 you can safely ignore. If a device does not receive an IP address from any DHCP server within a few seconds, it assigns itself one starting with 169.254.x.x, but you can't reach anything with such an address, nor can anything reach your device.
The second one is the real one in your case. 192.168.x.x means you are part of a private network (but it could also be 10.x.x.x).
If you see multiple IPv4 addresses listed in the command prompt as a result of the ipconfig command, the adapter that has a Default Gateway carries your computer's main or current IPv4 address.
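For example, a quick way to see which adapter has a gateway is to filter the ipconfig output in the command prompt (a rough filter that just shows the IPv4 and Default Gateway lines together):
ipconfig | findstr /i "IPv4 Gateway"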
The first one is your computer's IP address, the second one is for application or OS virtualization.
I'm trying to configure a remote OpenFlow controller over an interface which is also part of the bridge Open vSwitch is managing. I am not using Mininet; rather, I have a real VM host (supporting a few qemu-kvm VMs) with a real Ethernet port. I want the tap interfaces plus the Ethernet port to all be in the same bridge, managed by OVS. The OpenFlow controller resides on a different host, reachable only through the physical Ethernet port.
So far I have set the remote controller for the bridge and put the failure mode into "standalone". Unfortunately, the network is simply not coming up after a reboot (NB: before I lost connectivity I did verify that traffic was flowing between the VM host and the OF controller host on port 6633). It seems that, at a minimum, I need to update the OVS database with an "in-band" setting in some table, but I'm not sure how to do this or whether it will be sufficient (along with the things I've already done). With Mininet, setting this "in-band" configuration appears to be handled by the "topo" command, but (obviously) I can't do it this way. Does anyone have any experience with this kind of OVS configuration?
Try this:
#ovs-vsctl add-br br1
#ifconfig br1 10.1.2.11 netmask 255.255.255.0
#ovs-vsctl set-controller br1 tcp:<controller-IP>:6633
You should then see the OVS bridge connected to the controller.
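Since your controller is reachable only through a port that is itself on the bridge, in-band control is what you're after. The Controller record in the OVS database has a connection-mode column (in-band or out-of-band); if your setup isn't already using in-band, you can try setting it explicitly (untested sketch):
#ovs-vsctl set controller br1 connection-mode=in-band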