IKE version and encryption for GCP VPN - google-compute-engine

I need to migrate an existing VPN peer to GCP. My current parameters are:
transform: esp-3des esp-sha-hmac no compression, settings ={L2L, Tunnel, IKEv1, }
I read in the GCP documentation that Cloud VPN doesn't support 3DES with IKEv1 (https://cloud.google.com/vpn/docs/concepts/supported-ike-ciphers).
What are the recommended settings for IKEv2?

Any device or service that supports the IKE ciphers for IKE version 1 or 2 [1] should be compatible with Cloud VPN. Each vendor has its own specific instructions for VPN configuration; the VPN interoperability guides [2] provide Google-tested configurations and vendor-specific notes for peer devices or services that you can use to connect to Cloud VPN.
[1] Supported IKE ciphers: https://cloud.google.com/vpn/docs/concepts/supported-ike-ciphers
[2] VPN interoperability guides: https://cloud.google.com/vpn/docs/how-to/interop-guides
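As a minimal sketch (the tunnel, gateway, region, peer address and shared secret below are placeholders, and the actual cipher selection happens during IKE negotiation with your peer device), an IKEv2 tunnel on the GCP side can be created like this:

    # Create a Classic VPN tunnel that negotiates with IKEv2 instead of IKEv1
    gcloud compute vpn-tunnels create my-tunnel \
        --region=us-central1 \
        --target-vpn-gateway=my-vpn-gateway \
        --peer-address=203.0.113.10 \
        --shared-secret=MY_SHARED_SECRET \
        --ike-version=2 \
        --local-traffic-selector=0.0.0.0/0 \
        --remote-traffic-selector=0.0.0.0/0

On the peer side, replace the esp-3des esp-sha-hmac transform with one built from the supported cipher list [1], for example an AES-based ESP encryption with SHA-2 integrity.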

Related

Power Platform integration with Azure APIM provisioned in VNet internal mode

We have an Azure APIM instance provisioned in VNet internal mode as described in this article: Connect to an internal virtual network using Azure API Management | Microsoft Docs. We can successfully consume APIs in APIM with Postman and via the Developer Portal from within the corporate network. However, we don't have any connectivity between Power Platform and APIM, and we get an error message while testing a Custom Connector from Power Apps.
Can someone please point me in the right direction on how to enable communication between Power Platform and Azure APIM in VNet internal mode? Any links and reference material are highly appreciated.
We decided to provision an Application Gateway with WAF in front of APIM that only allows traffic in from Power Platform. Reference blog post here: https://techcommunity.microsoft.com/t5/azure-paas-blog/apim-with-application-gateway-v1/ba-p/1795180.
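As a rough sketch of that setup (resource names, region and the APIM internal hostname are placeholders, not taken from the post), the gateway can be created with the WAF_v2 SKU pointing at the internal APIM endpoint; listeners, health probes and the rules that restrict inbound traffic to Power Platform are then configured on top of it:

    # Application Gateway with WAF in front of the internal APIM endpoint
    az network application-gateway create \
        --name apim-agw \
        --resource-group my-rg \
        --location westeurope \
        --sku WAF_v2 \
        --capacity 2 \
        --vnet-name apim-vnet \
        --subnet agw-subnet \
        --public-ip-address agw-pip \
        --servers api.internal.contoso.com \
        --http-settings-protocol Https \
        --http-settings-port 443 \
        --frontend-port 443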

OpenShift 4 installation on VirtualBox

Has anyone installed OpenShift 4 on VirtualBox VMs? How can I bypass the BMC limitation (BMC is required in install-config.yaml)?
https://docs.openshift.com/container-platform/4.6/installing/installing_bare_metal_ipi/ipi-install-installation-workflow.html#configuring-the-install-config-file_ipi-install-configuration-files
The prerequisites for an OCP 4.6 IPI (Installer Provisioned Infrastructure) install require BMC access to each node.
With this setup a UPI (User Provisioned Infrastructure) deployment would be a better fit. You would need to set up the VMs and DNS entries before starting the deployment, as described in https://docs.openshift.com/container-platform/4.6/installing/installing_bare_metal/installing-bare-metal.html#installation-requirements-user-infra_installing-bare-metal.
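For illustration, a minimal install-config.yaml for a bare-metal UPI install looks roughly like the sample in the linked documentation; note that platform is set to none, so no BMC details are needed (domain, cluster name, pull secret and SSH key below are placeholders):

    # install-config.yaml (minimal bare-metal UPI sketch, no BMC section)
    apiVersion: v1
    baseDomain: example.com
    compute:
    - name: worker
      replicas: 0            # worker nodes are booted manually from RHCOS media later
    controlPlane:
      name: master
      replicas: 3
    metadata:
      name: ocp4
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      networkType: OpenShiftSDN
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}               # UPI: no provisioning/BMC information required
    pullSecret: '<pull secret from the Red Hat console>'
    sshKey: '<ssh public key>'

From this file, openshift-install create manifests and openshift-install create ignition-configs generate the Ignition files that you boot the VirtualBox VMs with.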

Is there any tool in GCP to patch Compute Engine instances?

I have some SUSE, Red Hat, and CentOS VMs in Google Cloud. Now I want to patch these servers. Is there any GCP built-in tool, or is there a third-party tool I need to use?
@Jannatul, you've asked about a "GCP in-built tool or third party tool" in your question.
The answer to the first part of the question, regarding a built-in GCP tool, is "No". The OS deployment images in GCE are kept updated, but after deployment it is up to you to keep VM instances patched. At this time Google does not provide a cloud service for that purpose, since such a tool is outside the scope of the IaaS offering that GCE is.
As for the second part (a third-party tool), the approach to Linux patching is not GCP-specific; it should be similar to the one you would use in a private datacenter. Since you use commercial Linux distributions, including Red Hat and SUSE, those vendors' solutions could work for your needs: for example SUSE Manager or Red Hat Satellite (both originate from Spacewalk and support various Linux clients), as well as the open-source Spacewalk project itself.
GCP now has a built-in VM patching service as part of the VM Manager suite: https://cloud.google.com/compute/docs/os-patch-management
Users can get patch compliance reports and perform manual or automatically scheduled updates of Ubuntu, Debian, RHEL, SLES, and Windows VMs.
The service is free for the first 100 VMs.
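As a quick sketch (the display name is arbitrary, and the instances must be running the OS Config agent), an on-demand patch job can be started and monitored from gcloud:

    # Run a one-off patch job against all VMs in the current project
    gcloud compute os-config patch-jobs execute \
        --display-name="adhoc-security-patching" \
        --instance-filter-all

    # Check progress and results
    gcloud compute os-config patch-jobs list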

MySQL replication with masters and slaves in different Kubernetes clusters using Calico as the CNI plugin

I have a Kubernetes cluster in which there are some MySQL databases.
I want to have a replication slave for each database in a different Kubernetes cluster in a different datacenter.
I'm using Calico as the CNI plugin.
To make the replication process work, the slaves must be able to connect to port 3306 on the master servers, and I would prefer to keep these connections as isolated as possible.
I'm wondering about the best approach to manage this.
One of the ways to implement your idea is to use a new tool called Submariner.
Submariner enables direct networking between pods in different Kubernetes clusters on prem or in the cloud.
This new solution overcomes barriers to connectivity between Kubernetes clusters and allows for a host of new multi-cluster implementations, such as database replication within Kubernetes across geographic regions and deploying service mesh across clusters.
Key features of Submariner include:
Compatibility and connectivity with existing clusters: Users can deploy Submariner into existing Kubernetes clusters, with the addition of Layer-3 network connectivity between pods in different clusters.
Secure paths: Encrypted network connectivity is implemented using IPSec tunnels.
Various connectivity mechanisms: While IPsec is the default connectivity mechanism out of the box, Rancher will enable different inter-connectivity plugins in the near future.
Centralized broker: Users can register and maintain a set of healthy gateway nodes.
Flexible service discovery: Submariner provides service discovery across multiple Kubernetes clusters.
CNI compatibility: Works with popular CNI drivers such as Flannel and Calico.
Prerequisites to use it:
At least 3 Kubernetes clusters, one of which is designated to serve as the central broker that is accessible by all of your connected clusters; this can be one of your connected clusters, but comes with the limitation that the cluster is required to be up in order to facilitate inter-connectivity/negotiation
Different cluster/service CIDRs (as well as different Kubernetes DNS suffixes) between clusters. This is to prevent traffic selector/policy/routing conflicts.
Direct IP connectivity between instances through the internet (or on the same network if not running Submariner over the internet). Submariner supports 1:1 NAT setups, but has a few caveats/provider specific configuration instructions in this configuration.
Knowledge of each cluster's network configuration
A Helm version that supports the crd-install hook (v2.12.1+)
You can find more info, along with installation steps, on the Submariner GitHub.
Also, you may find the Rancher Submariner multi-cluster article interesting and useful.
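On the isolation requirement: since Calico enforces Kubernetes NetworkPolicy, one option once the clusters are connected is to allow ingress to the master pods only on TCP 3306 and only from the remote cluster's pod CIDR (the namespace, labels and CIDR below are placeholders for your own values):

    # mysql-master-policy.yaml -- apply with: kubectl apply -f mysql-master-policy.yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-mysql-replication-only
      namespace: databases
    spec:
      podSelector:
        matchLabels:
          app: mysql
          role: master
      policyTypes:
      - Ingress
      ingress:
      - from:
        - ipBlock:
            cidr: 10.245.0.0/16    # pod CIDR of the remote (slave) cluster
        ports:
        - protocol: TCP
          port: 3306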
Good luck.

How to secure different FIWARE GEs in the same virtual machine?

I'm deploying some Generic Enablers (Orion, Cygnus, Proton-CEP, Wirecloud) in the same VM using Docker.
Reading the FIWARE documentation, it has an example of a Wilma proxy securing an instance of Orion and getting authorization through the IdM.
The Wilma configuration does not seem to support different redirections.
I need to secure all of these services, which must be accessible from outside the server. My question is: is it possible to use Wilma to secure all of the Generic Enablers, or should I implement one instance of Wilma for each service provided?
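For context, a Wilma instance forwards requests to a single backend host/port, so a common pattern is one PEP proxy container per protected GE. A rough Docker sketch, assuming the FIWARE PEP Proxy image and its documented environment variables (ports, hostnames and IdM credentials below are placeholders):

    # One Wilma (PEP proxy) container in front of Orion; repeat with different
    # ports/backends for Wirecloud, Cygnus, etc.
    docker run -d --name pep-orion \
        -p 1027:1027 \
        -e PEP_PROXY_PORT=1027 \
        -e PEP_PROXY_APP_HOST=orion \
        -e PEP_PROXY_APP_PORT=1026 \
        -e PEP_PROXY_IDM_HOST=keyrock \
        -e PEP_PROXY_IDM_PORT=3000 \
        -e PEP_PROXY_APP_ID=<application-id-from-idm> \
        -e PEP_PROXY_USERNAME=<pep-proxy-user> \
        -e PEP_PASSWORD=<pep-proxy-password> \
        ging/fiware-pep-proxy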