Whitelist MySQL host from Kubernetes

I'm currently trying to build my services on Kubernetes using Istio, and I'm having trouble whitelisting all the host IPs that are allowed to connect to the MySQL database through the mysql.user table.
I always get the following error after a new deployment:
Host 'X.X.X.X' is not allowed to connect to this MySQL server
Every time I deploy my service, a new pod IP appears, and I have to replace the old user entry with the new host IP. I would really like to avoid using '%' for the host.
Is there any way I could register the node IP instead, so that the entry stays stable?

Both Kubernetes and Istio provide network-level protections, so setting the allowed host to '%' ("all") in mysql.user is safe in practice.
A Kubernetes network policy is probably the best cluster-level match for what you're looking for. You'd set the database itself to accept connections from all addresses, but then would set a network policy to refuse connections except from pods that have a specific set of labels. Since you control this by label, any new pods that have the appropriate set of labels will be automatically granted access without manual changes.
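As a sketch (the app: mysql and role: db-client labels here are assumptions for illustration), such a policy could look like this; note that it only takes effect if your cluster's network plugin (for example Calico or Cilium) enforces NetworkPolicy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-allow-clients
spec:
  # Select the MySQL pods this policy protects (label is hypothetical)
  podSelector:
    matchLabels:
      app: mysql
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Only pods carrying this label may connect; newly deployed pods
    # with the same label are granted access automatically
    - podSelector:
        matchLabels:
          role: db-client
    ports:
    - protocol: TCP
      port: 3306
```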
Depending on your needs, the default protection given by a ClusterIP service may be enough for you. If a service is ClusterIP but not any other type, it is unreachable from outside the cluster; there is no network path to make it accessible. This is often enough to prevent casual network snoopers from finding your database.
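For reference, a minimal ClusterIP Service in front of the database (name and labels assumed) looks like this; ClusterIP is also the default when type is omitted:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ClusterIP        # only reachable from inside the cluster
  selector:
    app: mysql           # hypothetical pod label
  ports:
  - port: 3306
    targetPort: 3306
```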
Istio's authorization system is a little more powerful and robust than purely network-level controls. It can limit calls by the Kubernetes service account of the caller, and it uses TLS certificates rather than just IP addresses to identify callers. However, it isn't enabled by default, and in my limited experience with it, it's very easy to accidentally configure it to block things like Kubernetes health checks or Prometheus metric probes. If you're satisfied with IP-level security, this might be more power than you need.
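To give a flavor of it, an Istio AuthorizationPolicy restricting access to the MySQL pods to callers running as a particular service account might look roughly like this (the namespace, labels, and service account name are assumptions, and the mesh needs mutual TLS for the caller's principal to be known):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: mysql-allow-backend
  namespace: default
spec:
  # Apply to the MySQL pods (label is hypothetical)
  selector:
    matchLabels:
      app: mysql
  action: ALLOW
  rules:
  - from:
    - source:
        # Identity comes from the caller's mTLS certificate,
        # not its IP address
        principals: ["cluster.local/ns/default/sa/backend"]
```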

Related

How can I port forward in OpenShift without using the oc client? Is there a way to use the Java client to port-forward to a pod, just like "oc port-forward"?

I need to access a Postgres database from my Java code, which resides in an OpenShift cluster, without initiating port forwarding manually through the oc port-forward command.
I have tried using the OpenShift Java client's OpenShiftConnectionFactory class to get the connection, passing the server URL and the username and password I use to log in to the console, but it didn't help.
(This is mostly just a more detailed version of Will Gordon's comment, so credit to him.)
It sounds like you are trying to expose a service (specifically Postgres) outside of your cluster. This is very common.
However, the best method does depend a bit on your physical infrastructure, because by definition we are trying to integrate with your networking. Look at the docs for Getting Traffic into your Cluster. Routes are probably not what you want, because Postgres speaks a plain TCP protocol rather than HTTP. But one of the other options in that chapter (Load Balancer, External IP, or NodePort) is probably your best option, depending on your networking infrastructure and needs.
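As a sketch of the NodePort option (the service name, labels, and node port are assumptions), you would expose Postgres on a fixed port on every node and connect to any node's IP on that port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-external
spec:
  type: NodePort
  selector:
    app: postgres          # hypothetical pod label
  ports:
  - port: 5432             # in-cluster port
    targetPort: 5432       # container port
    nodePort: 30432        # must fall in the NodePort range (default 30000-32767)
```

The Java code can then use a plain JDBC URL, for example jdbc:postgresql://<node-ip>:30432/mydb (database name is a placeholder), with no port forwarding involved.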

Confusion about Kubernetes networking with the external world

I have some confusion. When I try to access services like MySQL that are hosted externally, outside the cluster, what will be the source address of the packets sent to MySQL? To put it simply: when creating a user in MySQL for the API to access it, how do I create it?
For example:
CREATE USER 'newuser'@'IP or HOSTNAME' IDENTIFIED BY 'user_password';
What should the IP be: the pod IP or the host IP?
Is there a way for the pod to authenticate against MySQL regardless of which node it is spawned on?
Thank You
When accessing services outside the Kubernetes cluster, the source IP will be the regular IP of the node the application is running on.
So there is no difference whether you run the application directly on the node ("on metal") or inside a container.
Kubernetes will select an appropriate node to schedule the container (pod) onto, so the node might change over the pod's lifetime.
If you want to increase security, you should investigate TLS with mutual authentication in addition to the password. Filtering on the source IP is not the best approach for dynamic environments like cloud or Kubernetes.

Which URL/IP to use when accessing Kubernetes nodes on Rancher?

I am trying to expose services to the world outside the Rancher clusters.
api1.mydomain.com, api2.mydomain.com, and so on should be accessible.
Inside Rancher we have several clusters. I am trying to use one cluster specifically. It spans 3 nodes: node1cluster1, node2cluster1, and node3cluster1.
I have added an ingress inside the Rancher cluster to forward service requests for api1.mydomain.com to a specific workload.
In our DNS I created an entry for api1.mydomain.com to be forwarded, but it hasn't worked yet.
Which IP or URL should I enter in the DNS? Should it be rancher.mydomain.com, where the Rancher web GUI runs? Should it be a single node of the cluster that has the ingress (node1cluster1)?
Neither of these options seems ideal. What is the correct way to do this?
I am looking for a solution that exposes a full URL to the outside world. (Exposing ports is not an option, as the company's DNS can't forward to them.)
Simple answer based on the inputs provided: Create a DNS entry with the IP address of Node1cluster1.
I am not sure how you installed the ingress controller, but by default it's deployed as a DaemonSet. So you can use any one of the cluster nodes' IP addresses, or all of them (don't expect DNS to load-balance, though).
The other option is to have a load balancer in front with all the node IP addresses configured to actually distribute the traffic.
Another strategy that I have seen is to have a handful of nodes dedicated to run Ingress by use of taints/tolerations and not use them for scheduling regular workloads.
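For reference, the kind of ingress the question describes would look roughly like this (the backing service name and port are assumptions); the DNS entry for api1.mydomain.com then has to resolve to an address where the ingress controller actually listens, i.e. one or more node IPs or a load balancer in front of them:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api1
spec:
  rules:
  - host: api1.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api1-service   # hypothetical workload service
            port:
              number: 80
```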

Accessing MySQL running on Docker Swarm

I have what is going to be a production MySQL database, and we want to access it remotely, but we haven't found a secure way to do so.
Docker Swarm does not support host-bound ports such as 127.0.0.1:3303:3303, although normal (non-swarm) mode does. Publishing a port makes it public on all swarm nodes.
Using firewalls is not really an option since we would have to configure these on every single node in the swarm.
We have only two options on the table:
Opening the port and allowing connections only through TLS, enforcing the REQUIRE options Issuer and Subject, for one single user that would probably be read-only. This still seems highly insecure, due to having the port open.
Creating a temporary dockerized sshd service and making it available on the same network as the MySQL service. It is more hassle to manage these SSH containers, but it is more secure, since it would be turned on and off only when needed. (A sketch of this option follows below.)
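For what it's worth, such a setup might be sketched as a stack file along these lines (image names, passwords, and ports are placeholders, not a vetted configuration):

```yaml
version: "3.8"
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: change-me    # placeholder only
    networks:
      - dbnet                           # the MySQL port is never published
  bastion:
    # Any sshd image works; this one is just an example and
    # needs users/keys configured per the image's docs
    image: linuxserver/openssh-server
    networks:
      - dbnet
    ports:
      - "2222:2222"                     # only SSH is exposed
networks:
  dbnet:
    driver: overlay
```

You would then tunnel through the bastion, e.g. ssh -p 2222 -L 3306:mysql:3306 user@swarm-node, point the client at 127.0.0.1:3306, and remove the bastion service to close the path again.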
Question: Are there any other or better options for approaching this? And how badly insecure is it to have an open port plus TLS-only connections?
If you have a good argument against accessing MySQL remotely at all, I would appreciate it.

Google Compute Engine - How to allow access from (only) other project instances?

With Google Compute Engine, how do I create a firewall rule so that only instances within the same project are allowed access? Access from other clusters (within the same project) should be allowed.
The scenario is to allow a GKE cluster to access a cluster of RethinkDB database servers that run on GCE instances.
"So that only instances within the same project are allowed access" to what?
I assume you don't mean access to the cluster's apiserver, since that IP should already be accessible from all your instances.
If you mean accessing a container in a cluster from an instance outside the cluster, you can create a firewall rule to be more permissive about allowing traffic within your GCE network. You can either be very permissive or a little more fine-grained when doing this:
Very permissive - just create a firewall rule that allows traffic from the source IP range 10.0.0.0/8 to all instances in your network (don't add any "target tags") on all the protocols and ports you care about (e.g. tcp:1-65535,udp:1-65535,icmp). The 10.0.0.0/8 range will cover all instances and containers in your network (and nothing outside of it); a sketch of this rule follows after this list.
Separate firewall per cluster - do the same thing as number one, but add the target tag that's on all nodes in the cluster. You can get this from looking at one of the instances' tags or by looking at the target tags on the firewalls that GKE created for your cluster when it was created. The benefit of this approach is that it will let everything in your network talk to your cluster without exposing anything else in your network that you don't want to open up quite so much.
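If you manage infrastructure as config rather than through the console, the first rule could be written as a Deployment Manager resource roughly like this (the rule name and network reference are assumptions; the equivalent gcloud compute firewall-rules create command works just as well):

```yaml
resources:
- name: allow-internal                  # hypothetical rule name
  type: compute.v1.firewall
  properties:
    network: global/networks/default    # adjust to your network
    sourceRanges: ["10.0.0.0/8"]        # all instances and containers in the network
    # No targetTags: the rule applies to every instance in the network
    allowed:
    - IPProtocol: tcp
      ports: ["1-65535"]
    - IPProtocol: udp
      ports: ["1-65535"]
    - IPProtocol: icmp
```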
If you mean accessing a service from outside the cluster, then it's a little tougher since you need to run the kube-proxy on the instances outside the cluster and configure it to talk to the cluster's apiserver in order to route the service packets properly.
It turns out the problem was that I was accessing the RethinkDB instances via their external IPs. For some reason, this causes the firewall rule with internal source IPs not to match. The solution was to access the instances via their internal DNS names instead, in which case the firewall rule applies.
Furthermore, there is a default firewall rule already, default-allow-internal, which allows any traffic between instances on the same project. Therefore I do not need to create my own rule.