Configure in-band OpenFlow controller with OVS, not in mininet - qemu

I'm trying to configure a remote OpenFlow controller over an interface which is also part of the bridge that Open vSwitch is managing. I am not using mininet; rather, I have a real VM host (supporting a few qemu-kvm VMs) with a real ethernet port. I want the tap interfaces plus the ethernet port to all be in the same bridge and managed by OVS. The OpenFlow controller resides on a different host, reachable only through the physical ethernet port.

So far I have set the remote controller for the bridge and put the failure mode into "standalone". Unfortunately the network is simply not coming up after a reboot (NB: before I lost connectivity I did verify that traffic was flowing between the VM host and the OF controller host on port 6633).

It seems that, at a minimum, I need to update the OVS database with an "in-band" setting in some table, but I'm not sure how to do this, or whether it will be sufficient along with the things I've already done. With mininet, setting this "in-band" configuration appears to be handled by the "topo" command, but (obviously) I can't do it that way. Does anyone have any experience with this kind of OVS configuration?

Try this:
#ovs-vsctl add-br br1
#ifconfig br1 10.1.2.11 netmask 255.255.255.0
#ovs-vsctl set-controller br1 tcp:<controller-IP>:6633
You should then see OVS connect to the controller.
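The question also asks specifically about the in-band setting. As a hedged sketch (assuming the physical port is eth0; in-band is actually the default connection mode, stored in the connection_mode column of the Controller table):
#ovs-vsctl add-port br1 eth0
#ovs-vsctl set controller br1 connection-mode=in-band
#ovs-vsctl set-fail-mode br1 standalone
The last command keeps the bridge forwarding as an ordinary learning switch whenever the controller is unreachable, matching the "standalone" failure mode already mentioned in the question.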

Related

Load Balancer not able to connect with backend

I have deployed a Spring Boot app on an OCI compute instance and it's coming up nicely. The compute instance was created with a public IP and has its security list updated to allow connections from the internet. But I wasn't able to hit the endpoint from the internet. For that reason, I thought of configuring a load balancer.
I created the load balancer in a separate subnet (10.0.1.0/24), with its own route table and security list. I configured the LB's security list to send all protocol packets to the compute instance's CIDR (10.0.0.0/24) and configured the compute instance's security list to accept packets from the LB. I was expecting the LB to make a connection with the backend. But it does not.
I am able to hit the LB from the internet: (screenshot)
The LB's route table has all IPs routed through the internet gateway. There is no route defined for the compute instance's CIDR as it is inside the VCN. (screenshot)
The LB has its own security list, which allows outgoing packets to the compute instance and incoming packets from the internet, as below: (screenshot)
The compute instance's security list accepting packets from the LB: (screenshot)
Let me know if I am missing something here.
My internet gateway: (screenshot)
My backend set connection configuration from the LB: (screenshot)
The LB fails to make a connection with the backend, and there seems to be no logging info available: (screenshot)
The app is working fine if I access it from the compute node: (screenshot)
The LB has a health check that tests the connection to your service. If it fails, the LB will keep your backend out of rotation and report the critical health state you're seeing.
You can get to it by looking at the backend set and clicking the Update Health Check button.
Edit:
Ultimately I figured it out; you should run the following commands on your backend:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
Use the port that you configured your app to listen on.
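To confirm the rule took effect, you can list the open ports (standard firewalld, nothing OCI-specific):
sudo firewall-cmd --list-ports
You should see 8080/tcp (or whichever port you opened) in the output.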
I used httpd instead of Spring, but I also did the following:
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"
sudo restorecon -F -R -v /var/www/html
I'm not really too familiar with SELinux, but you may need to do something similar for your application.
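As a hedged example of what "something similar" might look like: if SELinux is enforcing and blocking a confined service from binding a non-standard port such as 8080, you can label the port for the relevant type:
sudo semanage port -a -t http_port_t -p tcp 8080
(Use -m instead of -a if the port is already labeled with another type.) Whether this applies depends on how your application is confined.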
Additionally, setting up a second host in the same subnet to log in to and test connecting to the other host will help troubleshooting, since it verifies whether your app is accessible at all outside the host it's on. Once it is, the LB should come up fine.
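For example, from that second host (the backend IP 10.0.0.5 and port 8080 below are placeholders):
curl -v http://10.0.0.5:8080/
If curl cannot connect, the problem is on the backend itself (firewall, SELinux, or the app's bind address) rather than at the LB.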
TL;DR In my case it helped to switch the Security List rules from stateful to stateless on the 2 relevant subnets (where the loadbalancer was hosted and where the backends were located).
In our deployment I had a loadbalancer with public IP located on one subnet, while the backend to this loadbalancer was on another subnet. Both subnets had one ingress and one egress rule - to allow everything (i.e. 0.0.0.0/0 and all ports allowed). The backends were still not reachable from the loadbalancer and the healthchecks were failing.
Even though, per the documentation, switching between stateful and stateless should not have made a difference in my case, it solved my issue.

Connect to host postgres db from minishift

I'm trying to connect to a Postgres database from a Spring Boot application deployed in minishift.
The Postgres server is running on the same host that minishift is running on.
I've tried setting the Postgres server to listen on a specific IP address and using this same address in the Spring Boot JDBC connection URL, but I still get org.postgresql.util.PSQLException: Connection to 172.99.0.1:5432 refused
I've also tried using 10.0.2.2
Also tried, in /etc/postgresql/9.5/main/postgresql.conf, setting:
listen_addresses = '*'
How can I connect to a database external to minishift, running on same host?
Besides the answer referenced in my comment, which suggests making your database listen on the IP address of the Docker bridge, you could make your pod use the network stack of the host. That way you can reach Postgres on the loopback interface. This works only if you can guarantee that the pod will always run on the same host as the database.
The Kubernetes documentation discourages using hostNetwork. If you understand the consequences you can enable it as in this example.
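A minimal sketch of such a pod (the names and image are placeholders, not from the original question):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: springboot-app
spec:
  hostNetwork: true              # share the host's network namespace
  containers:
  - name: app
    image: myorg/springboot-app:latest
EOF
With hostNetwork: true the JDBC URL can simply point at 127.0.0.1:5432. Note that Postgres must still accept the connection in pg_hba.conf.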
If a pod inside Kubernetes can't see the IP address of the host, then I guess it's an underlying firewall or networking issue. Try opening a shell inside the pod...
kubectl exec -it mypodname -- bash
Then try ping, telnet, curl, wget or whatever to see if you can reach the IP address.
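For instance, using the address from the question (curl's telnet scheme is just a convenient raw-TCP probe):
curl -v telnet://172.99.0.1:5432
If the TCP connection opens, networking is fine and the problem lies in Postgres's configuration; if it is refused or times out, look at firewalls and routing.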
It sounds like something's wrong with the networking setup of your minishift. It might be worth raising an issue with minishift: https://github.com/minishift/minishift/issues/new
If you can find an IP address on the host which is accessible from a Docker pod, you can create a Kubernetes Service and then an Endpoints object for that service with the IP address of the database on your host; then you can use the usual DNS discovery of Kubernetes services (i.e. using the service name as the DNS name), which will resolve to that IP address. Over time you could add multiple IP addresses for failover etc.
See: https://kubernetes.io/docs/user-guide/services/#without-selectors
Then you can use Services to talk to all your actual network endpoints, with your application code completely decoupled from whether the endpoints are implemented inside or outside Kubernetes, and with load balancing baked in!
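A sketch of a selector-less Service plus matching Endpoints (the service name host-postgres is made up; the IP is the one from the question):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: host-postgres
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: host-postgres        # must match the Service name
subsets:
- addresses:
  - ip: 172.99.0.1           # the host/database address
  ports:
  - port: 5432
EOF
The app can then use jdbc:postgresql://host-postgres:5432/<dbname> and the Service forwards to the external address.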

Cannot access Google Cloud Compute Instance External IP

I have set up a Google Cloud Compute instance:
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Haswell
Zone: us-east1-c
I can ssh in using the external address.
I have installed vncserver and can access it on port 5901 from localhost as well as via the internal IP.
I am trying to access it from the static external IP address, but it is not working.
I have configured the firewall to open the port to 0.0.0.0/0, but it is not reachable.
Can anyone help?
------ After further investigation based on the tips from the two answers (thanks, both!), I have a partial answer:
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in Python. I was able to get a response from the service over the internet.
The summary of the current situation is as follows:
The external IP address can be reached
It is enabled and working for SSH
It is enabled and working for HTTP
It does not seem to allow traffic from vncserver
Any idea how to configure the compute instance to allow for vncserver traffic?
If you have already verified that the Google firewall and your VM are not blocking packets, you must make sure that the VNC service is listening on an address reachable from outside (e.g. 0.0.0.0), not only on localhost.
You can always use a utility like nmap from outside the Google project to reveal information on the port status.
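For example, from a machine outside the project (5901 is the standard port for VNC display :1; substitute the real address):
nmap -Pn -p 5901 <external-ip>
A result of "open" means the service is reachable, "closed" means nothing is listening, and "filtered" points at a firewall.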
Enable HTTP/HTTPS traffic in the firewall as needed. It will work!
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in python. I was able to get a response from the service over the internet.
As such, the original question is answered: I can access the Google Cloud Compute instance's external IP. My wider issue is still not solved, but I will post a new, more specific question about it.
TL;DR: make sure you are requesting http, not https.
In my case I was following the link from my Compute Engine instance's External IP property, which takes you directly to the https version, and I didn't set up https, so that was causing the 'site not found' error.
Create an entry in your local ssh config file as below, with the local forward port mentioned. In my case it forwards to YARN's web UI, which I want to access in a browser.
Host hadoop
HostName <External-IP>
User <Local-machine-username>
IdentityFile ~/.ssh/<private-key-for-above-user>
LocalForward 8089 <Internal-IP>:8088
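With that entry in place, open the tunnel and then browse to the forwarded port:
ssh hadoop
Visiting http://localhost:8089 locally will now reach <Internal-IP>:8088 through the tunnel.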
In addition to having the firewall rules to allow HTTP traffic in both Google Cloud Platform and within the OS of the instance, make sure you install a web server such as Apache or Nginx.
After installing the web server, connect to the instance using SSH and verify that you do not get a failed connection with the following command:
$ sudo wget http://localhost
If the connection succeeds, it means that you can access your external URL:
http://<IP-EXTERNAL-VM>
Usually there are two main things to check.
1. Port
By default, only ports 80 and 443 plus ICMP are exposed. If your server is running on a different port, create a firewall rule for it (see the sketch after this list).
2. Firewall
Make sure you are allowing HTTP and HTTPS traffic as needed.
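As a hedged sketch with the gcloud CLI (the rule name is a placeholder; 5901 matches the VNC case above):
gcloud compute firewall-rules create allow-vnc --allow tcp:5901 --source-ranges 0.0.0.0/0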
For me the problem was that I had set the firewall rule's direction to 'Egress' instead of 'Ingress'.
If anyone has already enabled 'https', just disable it and check again.

Why is the connection failing when port-forwarding with dynamic DNS in the same network

I have a MySQL database running on my raspberry pi.
To access it I use dynamic DNS (duckdns) when I am outside of my network, but I would like to access it with the same dynamic domain name when I am inside my network. However, it is not working and I always get connection refused.
I would like to somehow enable this so I do not have to change the MySQL server address in app.config from my dynamic domain to localhost whenever I am inside my local network.
You'll need a gateway router that supports NAT hairpinning. Many consumer-grade units (and some supposedly commercial-grade equipment) don't support this. Either yours doesn't, or you need to find an option to enable it.
When you try to connect to the public IP address from inside the network, the router probably assumes that you want to connect to the router itself.
My cable modem's built-in router at home understands how to do this. When I access my server from the laptop, and connect to the public IP from inside, the router (inside the cable modem) does a transformation on the packets so that my server sees my connection coming from the router's IP address, not my laptop's IP address.
This is what has to happen, because when the server responds, it will respond to the machine that connected to it. If it responded directly to the laptop's address, the laptop would reject the traffic, since it would be coming from the server's internal IP, which is not the IP address the laptop connected to. So the server responds to the router, which does a second transform on the packet address, replacing the server's internal IP with the external IP. Remembering the session from the previous traffic, the router then sends the packet back to the laptop.
Ultimately this setup can't possibly work for you without the complicity of your router, which may not have that capability.
Some routers, however, have a DNS proxy that will allow you to create static entries. My former DSL modem could not hairpin NAT connections, but it had a way to create DNS entries that would be used to respond to internal DNS queries for a specific host... with a different IP than the one that DNS otherwise provided. That's an alternative workaround if the router supports it.
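If the router supports neither hairpinning nor DNS overrides, a client-side workaround is to pin the dynamic name to the Pi's LAN address in each internal client's hosts file (the address and hostname below are placeholders):
echo '192.168.1.50  myhome.duckdns.org' | sudo tee -a /etc/hosts
This only suits machines that stay inside the LAN, since the hosts entry overrides DNS everywhere.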

Performance of local domain vs localhost

Is there a performance difference between TCP connections to:
localhost / 127.0.0.1
a domain which resolves to the local machine
Or more specifically, do the latter connections go through the loopback device, or over the actual network?
The reason I'm asking is I'm thinking about changing database settings in many PHP apps so they use a full domain instead of localhost. That way we could more easily move the database to a different server, if the need arises.
This is implementation and operating system dependent. On Windows, anything connecting to a local IP address, even if it is an outside-facing IP, will go over loopback. This is a documented problem for applications such as packet sniffers, because you can't sniff the loopback. (Windows doesn't treat loopback as a "device" -- it is handled at the network level.) However, in this case it would work in your favor.
Linux, in contrast, will follow whatever you have in your routing table, so packets destined for your local machine would go over the network if the routing table weren't properly configured. However, in 99% of cases the routing will be configured properly: your packets won't go over the loopback device, but the TCP/IP stack will know that you are contacting a local IP, and they will "virtually" go out and come back in on the proper ethernet device without touching the wire.
In a properly configured environment, the only bottleneck for using a domain name would be DNS resolution time. Contacting an outside DNS server can add additional latency to your configuration. However, if you add the domain name to your /etc/hosts file (C:\Windows\System32\drivers\etc\hosts on Windows), your system will skip the DNS resolution phase and obtain the IP directly, making this time cost moot.
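For example (db.example.com stands in for your real domain): while the database is still local, pin the name to the loopback address:
echo '127.0.0.1  db.example.com' | sudo tee -a /etc/hosts
If the database later moves to another server, update or remove the hosts entry and the PHP apps keep the same connection string.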
That depends on how the names are resolved. The procedure is typically /etc/hosts first and then DNS if that fails. If localhost is in your /etc/hosts, putting whatever.wherever in the file as well will make it resolve with the same speed.