Having seen https://groups.google.com/forum/#!topic/membase/MvyK0d2IFa8 (which is quite old), I am faced with a similar situation.
How can I configure Couchbase to use physical network interface A (1.1.1.1) for all Erlang cluster communications, and network interface B (2.2.2.2) for HTTP and other client-server communications, including port 8091 and the management ports?
Hoping to find a better way than messing around with the routing table.
TIA
I need to access a Postgres database from my Java code, which resides in an OpenShift cluster, without manually initiating port forwarding through the oc port-forward command.
I have tried using the OpenShift Java client class OpenShiftConnectionFactory to get the connection, passing the server URL and the username/password I use to log in to the console, but it didn't help.
(This is mostly just a more detailed version of Will Gordon's comment, so credit to him.)
It sounds like you are trying to expose a service (specifically Postgres) outside of your cluster. This is very common.
However, the best method to do so depends a bit on your physical infrastructure, because we are by definition trying to integrate with your networking. Look at the docs for Getting Traffic into your Cluster. Routes are probably not what you want, because Postgres speaks a plain TCP protocol rather than HTTP. But one of the other options in that chapter (Load Balancer, External IP, or NodePort) is probably your best option, depending on your networking infrastructure and needs.
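For example, once Postgres is reachable from outside the cluster through one of those options, a plain JDBC connection is enough; no oc port-forward and no OpenShift client object is needed. A minimal sketch, assuming the org.postgresql JDBC driver is on the classpath; the host, port, database name, and credentials below are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PostgresFromOutside {
        public static void main(String[] args) throws Exception {
            // Placeholder host/port: whatever the LoadBalancer, ExternalIP,
            // or NodePort exposes outside the cluster.
            String url = "jdbc:postgresql://exposed-host:5432/mydb";
            try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT version()")) {
                if (rs.next()) {
                    System.out.println(rs.getString(1)); // proves connectivity
                }
            }
        }
    }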
I am planning to run an SSIS ETL job whose SOURCE db is a SQL Server on a physical on-premise machine, while the DESTINATION db (Postgres/Patroni) runs on the OpenShift platform as pods/containers. The issue I am facing is that the DB hosted on OpenShift cannot be exposed via a TCP port. According to a few articles online, OpenShift only allows HTTP traffic via "routes". Is this assumption right? If so, how do people in the real world run ETL, bulk data transfer, or migration to a DB on OpenShift from outside? I am worried about using HTTP since I feel it's not efficient for ETL. A few folks mentioned using oc port-forward, but how would OpenShift port forwarding be stable enough for a production app? Please share your comments.
In a production environment it is a little questionable whether you want to expose your database to the public internet. Normally you would rather go with a site-to-site VPN.
That left aside, it is correct that OCP uses routes for most use cases, which then expose an HTTP(S) endpoint. If you need plain TCP, however, you can create a service of type LoadBalancer.
The regular setup with a route is stacked like
route --> service --> pods
where the service is commonly of type ClusterIP. With a service of type LoadBalancer, you eliminate the route and directly expose a TCP service.
If you run on a public cloud, OCP takes care of the leftover requirements for you, namely creating a load balancer with your cloud provider. In the case of AWS, for example, OCP would create an ELB (Elastic Load Balancer) for you.
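As a rough sketch of what that looks like from code, assuming the fabric8 kubernetes-client 6.x (the library underneath the OpenShift Java client); the service name, namespace, selector labels, and port are placeholders for your Postgres deployment, and the same object can of course be created declaratively with oc apply instead:

    import io.fabric8.kubernetes.api.model.IntOrString;
    import io.fabric8.kubernetes.api.model.Service;
    import io.fabric8.kubernetes.api.model.ServiceBuilder;
    import io.fabric8.kubernetes.client.KubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClientBuilder;
    import java.util.Map;

    public class CreateLoadBalancerService {
        public static void main(String[] args) {
            // A service of type LoadBalancer pointing at the Postgres pods.
            Service svc = new ServiceBuilder()
                    .withNewMetadata()
                        .withName("postgres-external")            // placeholder name
                    .endMetadata()
                    .withNewSpec()
                        .withType("LoadBalancer")
                        .withSelector(Map.of("app", "postgres")) // placeholder label
                        .addNewPort()
                            .withPort(5432)
                            .withTargetPort(new IntOrString(5432))
                        .endPort()
                    .endSpec()
                    .build();

            try (KubernetesClient client = new KubernetesClientBuilder().build()) {
                client.services().inNamespace("my-project").resource(svc).create();
            }
        }
    }

On a public cloud the platform then provisions the external load balancer (e.g. the ELB mentioned above) and assigns the service an external address for plain TCP traffic.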
You can find more information in the documentation.
I made a simple Instant Message Chat Client and Server on TCP, both running on Adobe AIR. It works great, and it was an interesting way to learn basic network programming.
My question: Is it possible to change the data in a packet sent from the Chat Server before it arrives at the Client, without using the Server or Client to do so? Perhaps with a separate program?
I am new to Network programming so I apologize if this is a dumb question.
Your question is very broad, so the answer is broad as well: yes, it's possible.
For that you need the packets between the client and server to pass through a third program. There are quite a lot of ways to achieve that. Here's a non-exhaustive list:
First, on your own machines (client/server), you could get access to the packets from the operating system using various low-level APIs, for instance iptables + NFQUEUE on Linux or the Windows Filtering Platform on Windows.
Second, you could get access to the packets by intentionally having them pass through some proxy program, which may or may not reside on the same machine as the client or the server (see the sketch after this list).
Third, you could get access to the packets by picking them up from the network itself. For instance, you could set up some Linux machine as a router and have it sit between the client and the server (as long as they're not on the same machine). That Linux machine will now have access to all of the packets that pass through it, and it can pass them to various user-space programs using hooks such as the previously mentioned nfqueue.
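To make the second option concrete, here is a minimal sketch of a rewriting TCP proxy in Java. The ports and the uppercase transformation are placeholders; you would point the chat client at LISTEN_PORT instead of the real server:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class RewritingProxy {
        static final int LISTEN_PORT = 9000;           // clients connect here
        static final String SERVER_HOST = "localhost"; // the real chat server
        static final int SERVER_PORT = 9001;

        public static void main(String[] args) throws IOException {
            try (ServerSocket listener = new ServerSocket(LISTEN_PORT)) {
                while (true) {
                    Socket client = listener.accept();
                    Socket server = new Socket(SERVER_HOST, SERVER_PORT);
                    // One pump per direction; only server->client data is altered.
                    new Thread(() -> pump(client, server, false)).start();
                    new Thread(() -> pump(server, client, true)).start();
                }
            }
        }

        static void pump(Socket from, Socket to, boolean rewrite) {
            try (InputStream in = from.getInputStream();
                 OutputStream out = to.getOutputStream()) {
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    if (rewrite) {
                        // Toy transformation: uppercase the chat text. A real
                        // rewrite would need to respect the message framing.
                        String s = new String(buf, 0, n, StandardCharsets.UTF_8);
                        out.write(s.toUpperCase().getBytes(StandardCharsets.UTF_8));
                    } else {
                        out.write(buf, 0, n);
                    }
                }
            } catch (IOException ignored) {
                // connection closed; fall through to cleanup
            } finally {
                try { from.close(); } catch (IOException ignored) {}
                try { to.close(); } catch (IOException ignored) {}
            }
        }
    }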
I have two OpenShift routers running as pods in OSE.
However, I don't see any associated services in my OpenShift cluster that forward traffic to them or load-balance across them.
Should I expose my routers to the external world in a normal OSE environment?
Note that this is in a running OpenShift (OSE) cluster, so I do not think it would be appropriate to recreate the routers with new service accounts, and even if I did want to do this, it isn't always guaranteed that I will have access inside OpenShift to do so.
If you are talking about the HAProxy routers which are part of the OpenShift platform, and which handle routing of external HTTP/HTTPS requests through to the pods of an application that has been exposed using a route, then no, you should not expose them as an OpenShift Route. Adding a Route for them would be circular, as the router is what implements routes.
The incoming ports of the HAProxy routers do need to be exposed outside of the cluster, but this should have been handled as part of the setup done when the OpenShift cluster was installed. Exactly what you needed to do to prepare for that depends on the target system into which OpenShift was installed.
It may be better to step back and explain the problem you are having. If it is an installation issue, you may be better asking on one of the lists at:
https://lists.openshift.redhat.com/openshiftmm/listinfo
as that is more frequented by people more familiar with installing OpenShift.
We have developed a client app and a server app. The client communicates with the server using the HTTP protocol and sends some data to be processed by the server.
Our architecture allows us to have the server installed anywhere. It can be on the same network as the client or even in the cloud.
When the server is hosted in the cloud, it makes sense to ask the user for the server address (since it can change if the user wishes), but it does not make sense when the server is on the same network as the client. Even so, we are currently asking users to configure the server IP/name in order to connect to the server.
To avoid asking users for the address, I have developed a discovery service based on UDP: the client broadcasts a message, and the server answers with its address. It works in some cases, but not when the user has some kind of firewall, proxy, or even an antivirus in the way.
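For reference, the client side of that exchange looks roughly like this (the port number and probe message are placeholders):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class BroadcastDiscovery {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true);
                socket.setSoTimeout(2000); // give up if no server answers in 2 s

                // Broadcast a probe on the local subnet.
                byte[] probe = "DISCOVER_SERVER".getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(probe, probe.length,
                        InetAddress.getByName("255.255.255.255"), 8888));

                // The server replies from its own address.
                byte[] buf = new byte[256];
                DatagramPacket reply = new DatagramPacket(buf, buf.length);
                socket.receive(reply);
                System.out.println("Server at " + reply.getAddress().getHostAddress());
            }
        }
    }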
I have read a lot about discovery services, and the one I like most is Bonjour.
So, the question is: what is the best way to discover a server's IP when the server is on the same network as the client, without being blocked by firewalls, proxies, etc.?
You can keep your service purely local (in the intranet) and build on top of what you are using now by implementing hole punching. You can get past firewalls, but I'm really not sure about antivirus software policies.
Or you can establish a well-known HTTP-based discovery service on the internet:
A server comes alive and sends its (local) IP address to the discovery service (and keeps sending keep-alives).
On startup, the client queries that discovery service, which identifies the local subnet the client is in and returns the local IP address of the server.
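A minimal sketch of both halves of that exchange, assuming a hypothetical registry exposing /register and /lookup endpoints (which pairs requests arriving from the same external IP) and using Java 11's built-in HttpClient:

    import java.net.InetAddress;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class DiscoveryRegistry {
        // Hypothetical registry URL and endpoints.
        static final String REGISTRY = "https://discovery.example.com";

        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();

            // Server side: announce the local address (repeat periodically as a
            // keep-alive). getLocalHost() is a simplification; a real server may
            // need to pick the right network interface explicitly.
            String localIp = InetAddress.getLocalHost().getHostAddress();
            http.send(HttpRequest.newBuilder()
                            .uri(URI.create(REGISTRY + "/register?ip=" + localIp))
                            .timeout(Duration.ofSeconds(5))
                            .POST(HttpRequest.BodyPublishers.noBody())
                            .build(),
                    HttpResponse.BodyHandlers.discarding());

            // Client side: the registry matches this request's external IP
            // against registered servers and returns the server's local IP.
            String serverLocalIp = http.send(HttpRequest.newBuilder()
                            .uri(URI.create(REGISTRY + "/lookup"))
                            .timeout(Duration.ofSeconds(5))
                            .GET()
                            .build(),
                    HttpResponse.BodyHandlers.ofString()).body();
            System.out.println("Server found at " + serverLocalIp);
        }
    }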
That of course creates a single point of failure in your system: if the discovery service kicks the bucket, your clients cannot find servers. You can remedy that by replicating the service and/or introducing fallback mechanisms (like the purely local discovery you have), which you probably want to do anyway. The only problem you might have is the subnet identification, if computers in local subnets don't share external IP addresses (then it depends on what a local subnet is for you).