Connect to host postgres db from minishift - openshift

I'm trying to connect to a Postgres database from a Spring Boot application deployed in Minishift.
The postgres server is running on the same host that minishift is running on.
I've tried setting the Postgres server to listen on a specific IP address and using that same address in the Spring Boot JDBC connection URL, but I still get org.postgresql.util.PSQLException: Connection to 172.99.0.1:5432 refused
I've also tried using 10.0.2.2.
I've also tried setting, in /etc/postgresql/9.5/main/postgresql.conf:
listen_addresses = '*'
How can I connect to a database external to Minishift, running on the same host?

Besides the answer referenced in my comment, which suggests making your database listen on the IP address of the Docker bridge, you could make your pod use the network stack of its host. That way you could reach Postgres on the loopback interface. This works only if you can guarantee that the pod will always run on the same host as the database.
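For the first approach, a minimal sketch of the Postgres side, assuming a Docker bridge address of 172.17.0.1 and a pod subnet of 172.17.0.0/16 (both are assumptions; check your own setup, since the question's 172.99.0.1 suggests different values):

# /etc/postgresql/9.5/main/postgresql.conf
listen_addresses = '172.17.0.1'    # or '*' to listen on every interface

# /etc/postgresql/9.5/main/pg_hba.conf: allow clients from the pod subnet
host    all    all    172.17.0.0/16    md5

Restart Postgres afterwards. Note that even with listen_addresses set, Postgres rejects clients that have no matching pg_hba.conf entry.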
The Kubernetes documentation discourages using hostNetwork. If you understand the consequences, you can enable it, as in the sketch below.
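A minimal sketch of such a pod (the pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  hostNetwork: true        # share the host's network namespace
  containers:
  - name: myapp
    image: myapp:latest

With hostNetwork: true the container shares the host's loopback, so it can reach Postgres at 127.0.0.1:5432.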

If a pod inside Kubernetes can't see the IP address of the host, then I guess it's an underlying firewall or networking issue. Try opening a shell inside the pod...
kubectl exec -it mypodname -- bash
Then try to ping, telnet, curl, or wget the address to see if you can reach it.
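For example, from inside the pod (172.99.0.1 is the host address from the question; the tools may need to be installed in the image first):

ping -c 3 172.99.0.1                # is the host reachable at all?
curl -v telnet://172.99.0.1:5432    # can a TCP connection to Postgres be opened?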
It sounds like something's wrong with the networking setup of your minishift. It might be worth raising an issue with minishift: https://github.com/minishift/minishift/issues/new
If you can find an IP address on the host that is accessible from a Docker pod, you can create a Kubernetes Service and then an Endpoints object for that Service containing the IP address of the database on your host. You can then use the usual DNS discovery of Kubernetes Services (i.e. use the Service name as the DNS name), which will resolve to that IP address. Over time you could add multiple IP addresses for failover and so on.
See: https://kubernetes.io/docs/user-guide/services/#without-selectors
Then you can use Services to talk to all your actual network endpoints, with your application code completely decoupled from whether the endpoints are implemented inside or outside Kubernetes, and with load balancing baked in!
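A minimal sketch of such a Service plus hand-written Endpoints, assuming pods can reach the host at 172.99.0.1 (the address from the question):

apiVersion: v1
kind: Service
metadata:
  name: postgres           # pods connect to postgres:5432
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres           # must match the Service name
subsets:
- addresses:
  - ip: 172.99.0.1         # the database host's address
  ports:
  - port: 5432

Because the Service has no selector, Kubernetes leaves the Endpoints alone and you maintain the address list yourself. The Spring Boot JDBC URL then becomes jdbc:postgresql://postgres:5432/yourdb (database name hypothetical).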

Related

How can I port forward in OpenShift without using the oc client? Is there a way to use the Java client to port forward into a pod, just like "oc port-forward"?

I need to access a Postgres database from my Java code, which resides in an OpenShift cluster, without initiating port forwarding manually through the oc port-forward command.
I have tried using the OpenShift Java client's connection factory to get a connection, passing the server URL and the username/password I use to log in to the console, but it didn't help.
(This is mostly just a more detailed version of Will Gordon's comment, so credit to him.)
It sounds like you are trying to expose a service (specifically Postgres) outside of your cluster. This is very common.
However, the best method does depend a bit on your physical infrastructure, because we are by definition trying to integrate with your networking. Look at the docs for Getting Traffic into your Cluster. Routes are probably not what you want, because Postgres speaks a plain TCP protocol rather than HTTP. One of the other options in that chapter (Load Balancer, External IP, or NodePort) is probably your best option, depending on your networking infrastructure and needs.
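As a concrete illustration, a minimal NodePort sketch (the service name, selector label, and node port are all assumptions):

apiVersion: v1
kind: Service
metadata:
  name: postgres-external
spec:
  type: NodePort
  selector:
    app: postgres          # assumed label on the Postgres pods
  ports:
  - port: 5432             # port inside the cluster
    targetPort: 5432       # container port
    nodePort: 30432        # opened on every node (default range 30000-32767)

Clients outside the cluster can then reach Postgres at any node's IP on port 30432.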

Connection refused from serverless-offline lambda to host database

This question concerns the serverless-offline plugin and a local MySQL database connection. The scenario for my test is as follows.
Using the serverless-offline plugin, a lambda function is deployed locally on my machine.
The triggered lambda cannot connect to the local database.
Probably serverless-offline creates a Docker image to launch the lambda, and the address and port mapping inside the Docker container are not correct. However, serverless-offline does not expose those Docker options. I am stuck trying to connect to the database from the lambdas deployed locally with serverless-offline.
I used localhost:3306 for the DB host, but it does not work. I tried port forwarding to connect to the database via a public IP address, which does not work either.
The database connection should be possible somehow, but it is refused every time. Any help?
I'll do my best to address several areas of your post, in order of their appearance:
serverless-offline creates a Docker image to launch the lambda
Incorrect. The Serverless Framework and its plugins (serverless-offline, etc.) have absolutely nothing to do with Docker or Docker-related technologies.
I used localhost:3306 for the DB host, but it does not work
From your post, I am gathering that you simply do not have a MySQL service running on your local machine. Is that what you need? Reply to this post and I'll try to help, or simply google examples of how to install, start, and configure a MySQL server.
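For instance, on a Debian/Ubuntu machine (an assumption; adapt to your OS), a quick check and setup might look like:

sudo systemctl status mysql            # is a MySQL server running at all?
sudo apt-get install mysql-server      # install one if not
sudo systemctl start mysql
mysql -h 127.0.0.1 -P 3306 -u root -p  # verify you can connect locally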
I tried port forwarding to connect to the database via a public IP address, which does not work
I assume you're talking about the popular ssh -L trick of connecting to a remote database over an SSH connection? From your post, I am gathering that you simply are not performing this operation correctly. Do you need help doing that? Reply to this post and I'll try to help, or simply google examples of how to use SSH port forwarding to connect to a MySQL database.
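For reference, a minimal sketch of that trick (the user, host, and ports are placeholders):

# forward local port 3306 to port 3306 on the remote database host
ssh -N -L 3306:localhost:3306 user@db.example.com

Your code then uses localhost:3306 as the DB host while the tunnel is open.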

How to connect mysql-client to my spring boot app

I have a Spring Boot jar file that I'm running on a Compute Engine VM.
I can also connect with a SQL client, but what MySQL address should I give Spring Boot?
I assume you are using GCP's hosted MySQL (Cloud SQL)?
If so, and you are connecting to it via the Cloud SQL Proxy running on the same machine, then you just use localhost. The proxy knows the way to the server from there, assuming you've configured the instance name, project, etc. correctly.
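A minimal sketch of that setup (the instance connection name and database name are assumptions):

# on the VM: run the proxy, listening on the default MySQL port
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306

# Spring Boot then connects through the proxy:
# spring.datasource.url=jdbc:mysql://localhost:3306/mydb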
Otherwise, without the proxy, you can use your SQL instance's public IP address, which you can see on the list of running instances when you select the SQL page.
In the second case (using the actual IP address), keep in mind that GCP probably won't let the VM running your application through the firewall to the SQL instance directly. To work around this, you'd have to list your VM's IP address in the Authorized Networks section of the SQL entry (click on your SQL instance in the list and select the Authorization tab). Also keep in mind that your VM's IP address is ephemeral by default (unless you made an effort to make it permanent), so if you restart your VM, the above authorization will no longer make sense. So make sure you make your VM's IP address permanent.
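Both steps can also be done from the command line; a sketch (the instance name, IP address, and region are placeholders):

# allow the VM's external IP through to the Cloud SQL instance
gcloud sql instances patch my-instance --authorized-networks=203.0.113.7/32

# promote the VM's ephemeral external IP to a static (permanent) one
gcloud compute addresses create my-vm-ip --addresses=203.0.113.7 --region=us-central1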

Hadoop cluster on Google Compute Engine: Accessing master node via REST

I have deployed a Hadoop cluster on Google Compute Engine. I then run a machine learning algorithm (Cloudera's Oryx) on the master node of the cluster. The output of this algorithm is accessed via an HTTP REST API, so I need to access it either with a web browser or via REST commands. However, I cannot resolve the address for the output of the master node, which takes the form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091.
I have allowed HTTP traffic and opened ports 80 and 8091 on the network, but I still cannot resolve the address. Note this HTTP address is NOT the IP address of the master node instance.
I have followed along with examples for accessing the IP addresses of compute instances. However, I cannot find examples of accessing a single node of a Hadoop cluster on GCE that follows this form: http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091. Any help would be appreciated. Thank you.
The reason you're seeing this is that the "HOSTNAME.c.PROJECT.internal" name is only resolvable from within the GCE network that the instance itself is on; these domain names are not globally visible. So, if you were to SSH into your master node first and then curl http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091, you should successfully retrieve the contents, whereas trying to access it from your personal browser will just fail to resolve that hostname into an IP address.
So unfortunately, the quickest way for you to retrieve those contents is indeed to use the external IP address of your GCE instance. If you've already opened port 8091 on the network, simply use gcutil getinstance CLUSTER_NAME-m and look for the entry specifying external IP address; then plug that in as your URL: http://[external ip address]:8091.
If you turned up the cluster using bdutil, a more involved but nicer way to access it is by running the bdutil socksproxy command. This opens a dynamic-port-forwarding SSH tunnel to your master node acting as a SOCKS5 proxy; you then configure your browser to use localhost:1080 as its proxy server (make sure to enable remote DNS resolution) and visit the normal http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 URL.
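Under the hood this is roughly a dynamic-port-forwarding SSH tunnel, which you could also open by hand (the username and address are placeholders):

# open a SOCKS5 proxy on localhost:1080, tunnelled through the master node
ssh -N -D 1080 username@MASTER_EXTERNAL_IP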

Heroku Node.js Remote Mysql Database IP Address

I have a remote MySQL database that I am connecting to through Node.js on Heroku. My MySQL host (Bluehost) wants me to input the IP addresses of all remote MySQL connections.
Heroku doesn't have a dedicated IP for my app, so how can I connect to it? Bluehost mentions something about a Class C IP on its page, but I'm not sure Heroku has one...
Also, I believe I already have all of the heroku environment variables set up correctly:
(heroku config:add EXTERNAL_DATABASE_URL=...)
Thanks :D
Here's what Bluehost says about dynamic IP addresses:
Dynamic IP Addresses
Having a dynamic IP address means that the connecting IP address can change periodically depending on the Internet Service Provider (ISP). You must update the connecting IP in Remote MySQL every time it changes.
from https://my.bluehost.com/cgi/help/89.
So at the very least, each time you redeploy your application you have a chance of getting a different IP address, which seems highly impractical. Why don't you use Heroku's MySQL offering instead?
You can use one of the 'static IP' add-ons and proxy the connection via that static IP; see this discussion.
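For example, with QuotaGuard Static (one possible add-on choice; the plan name is an example), the setup might look like:

heroku addons:create quotaguardstatic:starter   # provisions static outbound IPs
heroku config:get QUOTAGUARDSTATIC_URL          # proxy URL to route MySQL traffic through

You would then whitelist the add-on's static IPs in Bluehost's Remote MySQL section and route your database connections through the proxy.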