SnappyData v.0.5
In our AWS SnappyData instance, we have the following attributes:
public IP: 52.x.x.x (exposed to the Internet)
private/internal IP: 172.x.x.x (exposed only inside AWS)
private/internal Name: ip-172-x-x-x.us-west-2.compute.internal (exposed only inside AWS)
To connect with JDBC from my Windows client, I use a JDBC URL like this:
jdbc:snappydata://52.x.x.x:1527/
The sequence of events the connection makes is:
1. The JDBC client connects into AWS and reaches the Locator at 172.x.x.x:1527.
2. The Locator finds the Server running at 172.x.x.x:somePort.
3. The Locator sends the internal host name back to the Windows client.
4. The Windows JDBC client tries to connect to ip-172-x-x-x.us-west-2.compute.internal.
5. The JDBC connection fails, because only the 52.x.x.x IP address is actually reachable from the Internet.
To remedy, I had to change my Windows hosts file, adding a mapping of:
52.x.x.x ip-172-x-x-x.us-west-2.compute.internal
Please advise on a better way, so my clients don't need to hack their 'hosts' files.
You can set the "prefer-netserver-ipaddress" property on servers to force sending IP addresses back to clients, e.g. -prefer-netserver-ipaddress=... or -J-Dgemfirexd.prefer-netserver-ipaddress=... on the command line.
The default is to send host names because in most scenarios it is the host name that caters to both internal and external clients: looked up from inside the subnet it resolves to the internal IP address, while from outside it resolves to the external IP address.
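For example, a conf/servers entry might look like the following; a sketch where the bind address is a placeholder and the value true is my assumption about what the property takes:
172.x.x.x -client-bind-address=172.x.x.x -J-Dgemfirexd.prefer-netserver-ipaddress=true
With this set, clients receive the server's IP address instead of the internal host name.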
Related
Is there a way to proxy/port-forward GCP Cloud SQL so that we can connect to it via the internet?
I don't want to do an SSH port forward via a virtual machine. Instead, I'm looking for a way to connect to Cloud SQL from the public IP of either a virtual machine or a Kubernetes service.
I don't want to connect directly from the public IP of the Cloud SQL instance, as that requires us to whitelist the user's IP address. We have also tried the Cloud SQL proxy, but faced speed and performance issues.
Hence, I'm now looking for a solution to proxy the Cloud SQL connection through a VM or Kubernetes service.
I have tried using Stunnel to proxy the connection as described in this documentation.
; log to a file for debugging
output=/tmp/stunnel.log
; server CA certificate downloaded for the Cloud SQL instance
CAfile=/tmp/mysql-server-ca.pem
; client mode: accept plain TCP locally, wrap it in TLS towards the server
client=yes
pid=/var/run/stunnel.pid
; verify the server certificate chain against CAfile
verifyChain=yes
sslVersion=TLSv1.2

[mysqls]
; listen for MySQL clients on all interfaces, port 3307
accept=0.0.0.0:3307
; forward to the Cloud SQL instance's private IP (placeholder)
connect=private-ip:3306
But, I get an error while connecting to the MySQL server:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 104
Edit:
Stunnel runs on a Virtual Machine on Google Cloud
Stunnel connects to CloudSQL via Private IP (Both VM and CloudSQL share the same subnet)
MySQL can be connected from the VM using the private IP
Stunnel Logs:
2022.09.22 10:53:17 LOG5[2]: Service [mysqls] accepted connection from 127.0.0.1:37014
2022.09.22 10:53:17 LOG5[2]: s_connect: connected <mysql-private-ip>:3306
2022.09.22 10:53:17 LOG5[2]: Service [mysqls] connected remote server from 10.128.0.53:53302
2022.09.22 10:53:17 LOG3[2]: SSL_connect: ../ssl/record/ssl3_record.c:331: error:1408F10B:SSL routines:ssl3_get_record:wrong version number
2022.09.22 10:53:17 LOG5[2]: Connection reset: 0 byte(s) sent to TLS, 0 byte(s) sent to socket
To access Cloud SQL from a Compute Engine VM, you can use either the Cloud SQL Auth proxy (with public or private IP) or connect directly using a private IP address. To authorize a client for a direct connection, try the following:
1. From the client machine or Compute Engine VM instance, use "What's my IP" to see the IP address of the client machine, and copy it.
2. In the Google Cloud console, go to the Cloud SQL Instances page.
3. To open the Overview page of an instance, click the instance name.
4. Select Connections from the SQL navigation menu.
5. In the Authorized networks section, click Add network and enter the IP address of the machine where the client is installed. Note: the IP address of the instance and the MySQL client IP address you authorize must be the same IP version, either IPv4 or IPv6.
6. Click Done, then click Save at the bottom of the page to save your changes.
7. Connect to your instance, either with SSL or without SSL.
To access a Cloud SQL instance from an application running in Google Kubernetes Engine, you can use either the Cloud SQL Auth proxy (with public or private IP), or connect directly using a private IP address. To connect to Cloud SQL you must have:
A GKE cluster, with the kubectl command-line tool installed and configured to communicate with the cluster. For help getting started with GKE, see the Quickstart.
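For the proxy route on GKE, a common pattern is to run the Cloud SQL Auth proxy as a sidecar container; a minimal sketch of the container entry, where the project, region and instance names are placeholders:
  - name: cloud-sql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
    command:
      - "/cloud_sql_proxy"
      - "-instances=MY_PROJECT:us-central1:MY_INSTANCE=tcp:3306"
The application container in the same pod then connects to 127.0.0.1:3306 as if MySQL were local.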
Check the document for steps on how to configure without SSL.
For Public IP-configured instances, a public-facing IPv4 address may be enabled, allowing users outside the GCP project and VPC network to connect to the instance.
Check the similar example here.
I am using TIBCO ComputeDB, which is new to me. It uses Spark and SnappyData. I can start both Spark and SnappyData and connect to SnappyData using the command connect client '127.0.0.1:1527' or with the internal IP of the AWS server. But when I try to connect with the AWS external IP using the above command, it does not work. I am also not able to connect to SnappyData from a client like SQL Workbench/J. I have all the required drivers installed on the local machine and the server, and all ports are open on the AWS server. I can access the dashboard using http://externalip:5050.
I also edited the conf/locators and conf/servers files as explained in the link below, and the hosts file entry seems fine.
=> https://snappydatainc.github.io/snappydata/howto/connect_to_the_cluster_from_external_clients/
The lines were as below:
=> "Private IP" -client-bind-address="Private IP" -hostname-for-clients="Public IP"
=> "Private IP" -client-bind-address="Private IP" -client-port=1555 -hostname-for-clients="Public IP"
I followed the document below to connect with JDBC.
=> https://snappydatainc.github.io/snappydata/howto/connect_using_jdbc_driver/
But I am still not able to connect with the external IP.
=> Should connect client 'externalIP:1527'; work before I can connect to SnappyData from any client using the external IP?
Can someone advise what settings should be made to connect to SnappyData from the AWS external IP and with any SQL client?
Are the ports open for the public IP of the AWS instance itself? Right now that is needed if you connect using the public IP, even if you are trying to connect from the same AWS instance via the connect client ... command.
The rule of thumb is that the ports (e.g. 1527-1528) must be open for the client's IP in the security group. So, if the client is on the same AWS instance, then the ports must be open for its public IP.
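For example, opening the port with the AWS CLI might look like this; a sketch with a placeholder security group ID and client address:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 1527 --cidr 203.0.113.10/32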
If this doesn't help, can you paste the contents of the locators, servers and leads files present under the conf/ directory? You can remove/redact sensitive information in them, if any.
Also please paste the error messages you see.
We have refined the steps to set up the cluster on AWS here that could clear a few things: https://snappydatainc.github.io/snappydata/install/setting_up_cluster_on_amazon_web_services/#usingawsmgmtconsole
I am using software (Ingress) by FingerTec which uses a MySQL database.
Some setups of this system use only a single installation, consisting of a MySQL server and a client locally on the same machine.
I have been having issues since I started using the software when it is installed on a user's laptop/PC. The problem is that frequently, when running the MySQL server and client, a window pops up asking for the local IP address and port (127.0.0.1 and 3306 by default). To continue using the software, one needs to run the IngressDB installer, where you 'Update Connection' by giving the root user and password for MySQL and then 'Upgrade Database' to refresh the database for any new settings. After this step the software runs fine.
Yesterday I managed to reproduce this issue by changing the static IP on my laptop while connected directly to one of their access controllers. I had to re-run the Ingress DB installer.
Now my question is this:
When using your machine (laptop/PC), it normally gets its IP address, default gateway, subnet, etc. from a DHCP server, so there is no guarantee that you will always lease the same IP unless there is a reservation for the machine's MAC address.
As described earlier, whenever there is a change of the IP address leased from DHCP, a window pops up showing the loopback address 127.0.0.1 and the MySQL port 3306. It never shows the local IP address (e.g. 192.168.1.100). So I was wondering: why is the loopback IP not enough for the MySQL client/server, given that it stays the same forever?
Is it normal that software using a MySQL database server requires a static local IP on the machine hosting it? I am referring only to instances where both the MySQL server and client reside on the same machine.
I appreciate your thoughts on this, and maybe any other way I can get around it apart from making an IP address reservation in the DHCP server. Setting a static IP address manually on the LAN adapter is no solution for me, as this would limit the machine to a certain network, so it could not be used at other places.
If the client is on the same local machine as the server, the MySQL server specifically does not need a static IP, because it pretty much already has one: 'localhost' or '127.0.0.1'. If the client is not on the same machine as the server, the server would need a static IP.
If the machine is acting as a server for other content, yes, it would need a static IP. If you're doing this at home, chances are that your access point will let you configure it for a static IP.
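If the goal is simply to pin the server to an address DHCP can never change, MySQL can be bound to the loopback explicitly; a minimal my.cnf sketch (whether Ingress honors this setting is an assumption about that software):
[mysqld]
# listen only on the loopback interface, so LAN IP changes do not matter
bind-address = 127.0.0.1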
I'm trying to connect to a Postgres database from a Spring Boot application deployed in Minishift.
The Postgres server is running on the same host that Minishift is running on.
I've tried setting the Postgres server to listen on a specific IP address and using that same address in the Spring Boot JDBC connection URL, but I still get org.postgresql.util.PSQLException: Connection to 172.99.0.1:5432 refused
I've also tried using 10.0.2.2
Also tried, in /etc/postgresql/9.5/main/postgresql.conf, setting:
listen_addresses = '*'
How can I connect to a database external to minishift, running on same host?
Besides the answer referenced in my comment, which suggests making your database listen on the IP address of the Docker bridge, you could make your pod use the network stack of your host. This way you could reach Postgres on the loopback. This works only if you can guarantee that the pod will always run on the same host as the database.
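For the bridge-IP route, the Postgres side needs roughly the following; a sketch that assumes the Docker bridge subnet is 172.17.0.0/16 (adjust to your actual bridge/pod network), with a restart of Postgres afterwards:
# /etc/postgresql/9.5/main/postgresql.conf
listen_addresses = '*'
# /etc/postgresql/9.5/main/pg_hba.conf -- allow password logins from the bridge subnet
host  all  all  172.17.0.0/16  md5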
The Kubernetes documentation discourages using hostNetwork. If you understand the consequences you can enable it as in this example.
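A minimal sketch of enabling it in a pod spec (pod and image names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: springboot-on-host
spec:
  hostNetwork: true        # share the node's network stack; Postgres is then on 127.0.0.1
  containers:
    - name: app
      image: my-springboot-image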
If a pod inside Kubernetes can't see the IP address of the host, then I guess it's an underlying firewall or networking issue. Try opening a shell inside the pod...
kubectl exec -it mypodname -- bash
Then try to ping, telnet, curl, wget or whatever to see if you can reach the IP address.
It sounds like something's wrong with the networking setup of your minishift. It might be worth raising an issue with minishift: https://github.com/minishift/minishift/issues/new
If you can find an IP address on the host which is accessible from a Docker pod, you can create a Kubernetes Service and then an Endpoints object for the service with the IP address of the database on your host; then you can use the usual DNS discovery of Kubernetes services (i.e. using the service name as the DNS name), which will resolve to that IP address. Over time you could have multiple IP addresses for failover etc.
See: https://kubernetes.io/docs/user-guide/services/#without-selectors
Then you can use Services to talk to all your actual network endpoints, with your application code completely decoupled from whether the endpoints are implemented inside or outside Kubernetes, and with load balancing baked in!
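A minimal sketch of such a selector-less Service with a manually managed Endpoints object, assuming the database is reachable from pods at 172.99.0.1:5432 (the address from the question; substitute whatever host IP pods can actually reach):
apiVersion: v1
kind: Service
metadata:
  name: external-postgres
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-postgres   # must match the Service name
subsets:
  - addresses:
      - ip: 172.99.0.1      # host IP where Postgres listens
    ports:
      - port: 5432
The application can then use jdbc:postgresql://external-postgres:5432/mydb (database name is a placeholder), with the Service name resolved by cluster DNS.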
I have deployed a hadoop cluster on google compute engine. I then run a machine learning algorithm (Cloudera's Oryx) on the master node of the hadoop cluster. The output of this algorithm is accessed via an HTTP REST API. Thus I need to access the output either by a web browser, or via REST commands. However, I cannot resolve the address for the output of the master node which takes the form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091.
I have allowed http traffic and allowed access to ports 80 and 8091 on the network. But I cannot resolve the address given. Note this http address is NOT the IP address of the master node instance.
I have followed along with examples for accessing IP addresses of compute instances. However, I cannot find examples of accessing a single node of a hadoop cluster on GCE, that follows this form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091. Any help would be appreciated. Thank you.
The reason you're seeing this is that the "HOSTNAME.c.PROJECT.internal" name is only resolvable from within the GCE network of that same instance; these domain names are not globally visible. So, if you were to SSH into your master node first and then curl http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091, you should successfully retrieve the contents, whereas trying to access it from your personal browser will simply fail to resolve that hostname into any IP address.
So unfortunately, the quickest way for you to retrieve those contents is indeed to use the external IP address of your GCE instance. If you've already opened port 8091 on the network, simply use gcutil getinstance CLUSTER_NAME-m and look for the entry specifying external IP address; then plug that in as your URL: http://[external ip address]:8091.
If you turned up the cluster using bdutil, a more involved but nicer way to access your cluster is by running the bdutil socksproxy command. This opens a dynamic-port-forwarding SSH tunnel to your master node as a SOCKS5 proxy, so that you can configure your browser to use localhost:1080 as its proxy server (make sure to enable remote DNS resolution) and then visit the normal http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 URL.
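Under the hood that is just SSH dynamic port forwarding; a rough hand-rolled equivalent, with the user and master external IP as placeholders:
ssh -N -D 1080 myuser@MASTER_EXTERNAL_IP
With the tunnel up and the browser pointed at the SOCKS5 proxy on localhost:1080 (remote DNS resolution enabled), the *.internal host name resolves inside the GCE network.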