Private PaaS vs. Public PaaS vs. Self-managed Private PaaS - OpenShift

I have been trying to understand the difference between a private PaaS, a public PaaS, and a self-managed private PaaS.
My understanding so far is (and please correct me if I am wrong) that a private PaaS is deployed on-premises while a public PaaS is deployed on the vendor's premises.
Below are my questions:
1. A public PaaS like Google Anthos can be deployed on-premises as well; if that happens, does it become a private PaaS?
2. What is a self-managed private PaaS, what are some examples, and how is it different from a "private PaaS"?
3. Is Red Hat's OpenShift an example of a self-managed private PaaS?

A public PaaS like Google Anthos can be deployed on-premises as well; if that happens, does it become a private PaaS?
You can use GKE On-Prem to integrate your bare-metal servers as GKE nodes, but the master nodes remain in the cloud. You can read more in the GKE On-Prem documentation.
What is a self-managed private PaaS, what are some examples, and how is it different from a "private PaaS"?
A private PaaS means the entire installation is done and managed by you, not by the cloud provider. Some examples are Red Hat OpenShift and Rancher. Both can be installed in the cloud but are managed by you.
Is Red Hat's OpenShift an example of a self-managed private PaaS?
Yes, it is. You can use a cloud provider's infrastructure for the installation, but it is still a private PaaS.
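For illustration, a self-managed OpenShift cluster on a cloud provider typically starts from the openshift-install binary; a minimal sketch, assuming OpenShift 4 and that the installer prompts you for the platform details:

    # You run the installer yourself and own the cluster's lifecycle afterwards.
    openshift-install create cluster --dir=./my-cluster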

Related

Azure MySQL Flexible Server and Read Replica Connection Path

My understanding is that if I wanted to set up a read replica for a non-flexible Azure MySQL server that is connected to a virtual network using a private link, any replication will be done "over the open internet" rather than through the address exposed by the private link, as private links are a "one-way street", so to speak. The source IP according to the replica will be the public IP address of the MySQL server within Azure (regardless of whether you had provided public access to it or not).
My situation, on the other hand, is an Azure MySQL Flexible Server deployed in my tenant with private VNet integration. As I understand it, this is different from private links. I am trying to understand whether replication happens the same way, via some hidden public IP address, or whether replication sources from the private IP address assigned to the flexible server within the VNet it is deployed in, which I could then route with normal VNet routing rules wherever I want, be it through a private firewall or perhaps to a replication source over a VPN.
See my diagram below - I'm pretty confident my understanding of the diagram on the left is correct; I am trying to understand which path the replication for the flexible server will take here: via some hidden public IP, or routed properly through the firewall?
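Not an answer, but one way to probe this yourself; a sketch using hypothetical resource names, and the "network" output field is my recollection of the Azure CLI output, so verify against az mysql flexible-server show --help:

    # Confirm the flexible server is actually VNet-integrated:
    az mysql flexible-server show \
      --resource-group my-rg --name my-flex-server \
      --query "network"   # a delegatedSubnetResourceId here means VNet integration

    # From a VM inside the VNet, check what the server's FQDN resolves to;
    # a private IP suggests traffic to the server follows your VNet routes:
    nslookup my-flex-server.mysql.database.azure.com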

The peering connection cloudsql-mysql-googleapis-com is not established when a Cloud SQL instance is configured to use Private IP

I have created a private Cloud SQL instance in an app project. The network used is a shared VPC and it is hosted in a network project.
In the shared VPC:
- The private services access connection is enabled
- An internal IP range has been automatically allocated for the private connection
- A private connection has been created
If I go to the VPC Network > VPC Network Peering page, I don't see a peering connection named cloudsql-mysql-googleapis-com. Therefore, I cannot connect to my Cloud SQL instance using its private IP address; I can only reach it using its public IP address.
The same infrastructure works in the development environment; I use Terraform to generate the GCP resources. The two environments have exactly the same configuration.
Source code: https://gitlab.com/Chabane87/cloudsql-issue
Does anyone know under what circumstances this problem can happen?
Thanks
Based on the discussion about this issue on another of our support channels, it seems connectivity tests were run to zero in on the problem. While the connection from one of your instances to Cloud SQL succeeded using the public IP, it failed when using the private IP, but that is the intended behaviour.
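For anyone double-checking the private services access setup, the peering can be listed from the host project with standard gcloud commands; the project and network names below are placeholders:

    # Run against the host (network) project of the shared VPC:
    gcloud services vpc-peerings list \
        --network=SHARED_VPC_NAME --project=NETWORK_PROJECT_ID
    # The peering should also appear at the VPC level:
    gcloud compute networks peerings list \
        --network=SHARED_VPC_NAME --project=NETWORK_PROJECT_ID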
A Telnet test was conducted later using live traffic from the instance to Cloud SQL, and it found that a port is missing in the production environment while it is defined correctly in the development environment; this confirmed there is no issue with the networking itself. So please try to connect to Cloud SQL after adding the missing port in the prod project.
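A sketch of that fix, assuming the missing port is MySQL's default 3306 and that the firewall rule is named allow-sql - both placeholders:

    # Set the rule's allowed list so it includes the missing port
    # (note: --allow on update replaces the existing list):
    gcloud compute firewall-rules update allow-sql \
        --allow=tcp:3306 --project=PROD_PROJECT_ID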

Is it possible to SSH into IKS worker nodes deployed on VPC Gen 2 infrastructure?

If a K8s cluster is deployed on IBM VPC Gen 2 infrastructure, is it possible to SSH into the worker nodes? I have enabled a Public Gateway, but I'm not sure whether I can SSH using the public IP mentioned on the public gateway.
Also, is it possible to assign a public IP to every worker node, like a floating IP for every worker node?
If you're using the managed offering of IBM Cloud Kubernetes Service (IKS) or Red Hat OpenShift (ROKS), then SSH access is disabled by default.
Public Gateways enabled on the VPC Subnets of your worker nodes are for establishing outbound connections to the internet from the nodes contained within that subnet.
https://cloud.ibm.com/docs/containers?topic=containers-vpc-subnets#vpc_basics_pgw
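If you want to see what those gateways cover, the VPC CLI plugin can list them and the subnets they are attached to; command names are as I recall them from the IBM Cloud docs:

    # Public gateways in the targeted region, and the subnets using them:
    ibmcloud is public-gateways
    ibmcloud is subnets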
Likewise, if you're using either of the managed offerings mentioned above, you have no access to the actual VPC VSI worker nodes through your VPC infrastructure and cannot assign public IPs to them.
You can, however, enable a public service endpoint during VPC cluster creation, which allows services that are unable to communicate over the Private Service Endpoint to still work over the Public Service Endpoint.
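A minimal sketch of such a cluster creation; the IDs and flavor are placeholders, and the flags are from the IKS VPC CLI as I recall them, so confirm with ibmcloud ks cluster create vpc-gen2 --help:

    ibmcloud ks cluster create vpc-gen2 \
      --name my-cluster \
      --zone us-south-1 \
      --vpc-id <vpc-id> \
      --subnet-id <subnet-id> \
      --flavor bx2.4x16 \
      --workers 3
    # Omitting --disable-public-service-endpoint should leave the public
    # service endpoint enabled.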

Connect to GCP Cloud SQL instance via private IP

I am currently setting up my first Kubernetes cluster on GCP, having previously used mainly AWS for this.
The cluster is up and running and can access a local NFS server on the same Compute Engine VPC via private IP, so one stage of private network connection is fine.
The Cloud SQL server is running, and I can access it fine from the cluster if I open up the public IP to the world.
I have enabled a private IP address on the Cloud SQL instance, which looks good, but I cannot ping or connect from the same container that can reach the public IP.
The Cloud SQL private IP is in a different subnet, which I believe is to be expected.
I checked VPC Network Peering and found a relevant-looking rule.
I checked VPC routes and found the matching peering route with a next hop.
I have seen in the docs that private IP is still beta, so I guess there is potential for a glitch beyond my control.
I have also read up on running a proxy container inside each pod - I am hesitant to do this unless it is the only option, as the app may end up across platforms, so I would prefer a more standard configuration.
There's currently a requirement that the GKE cluster must be created with "VPC-Native" networking in order to access Cloud SQL via private IP. Unfortunately, you need to re-create the cluster to make it VPC-Native.
https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips
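A minimal sketch of creating a VPC-Native cluster; the name and zone are placeholders:

    # --enable-ip-alias creates the cluster as VPC-native (alias IP ranges):
    gcloud container clusters create my-vpc-native-cluster \
        --zone=us-central1-a \
        --enable-ip-alias

    # Afterwards, from a pod, a quick reachability check against the
    # instance's private IP (placeholder address, MySQL port assumed):
    nc -vz 10.59.0.3 3306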

Putting private information on a Public PaaS?

If I put my private information into any public PaaS (I'm currently using the OpenShift environment), would it be open to the public, or to employees of the company? I fail to understand in what sense a public PaaS is public.
Thanks.
The information that you upload or deploy to your gear on OpenShift is private to your gear(s); Red Hat/OpenShift employees will not access the data on your gear unless you give us your permission.
If you use Git for this, you can deploy gitolite on a virtual private server.
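A minimal sketch of that gitolite setup on a fresh server, following the upstream install steps; the key name is a placeholder:

    # As a dedicated 'git' user on the VPS, with your SSH public key copied over:
    git clone https://github.com/sitaramc/gitolite
    mkdir -p $HOME/bin
    gitolite/install -to $HOME/bin              # install the gitolite scripts into ~/bin
    $HOME/bin/gitolite setup -pk yourname.pub   # your key becomes the admin key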
I think a lot of collaboration tools can be deployed on VPSes, or, if you do not trust that, you can buy your own PC and use it as a server. I do the same: I have a Raspberry Pi with some Git repos and task tracker / calendar / LDAP applications that my team and I use.