I have 3 masters in my OpenShift cluster. After changing the identity provider in master-config.yaml and restarting master-api and master-controller on one of them, the change doesn't take effect unless I copy this configuration to all masters in the cluster. I'm wondering why?
I think this is a consequence of the master HA architecture: each master keeps its own copy of the configuration, so changes have to be synchronized to every master. For example, the controller service is only active on the elected leader at any given time. As far as I remember, this architecture changed as of v3.10: the configuration file is managed as one ConfigMap shared within the same node group, so nodes no longer require a restart for changes to take effect, but you still need to restart the master services on each master if the configuration changes.
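In case it helps, this is roughly what syncing the change looks like on an RPM-based OpenShift 3.x install (a minimal sketch only; the hostnames master1-master3 are placeholders, and the service names differ on OKD/Origin, e.g. origin-master-api):

# Copy the updated config from the first master to the other masters
# (hostnames and paths are placeholders for illustration).
for host in master2 master3; do
  scp /etc/origin/master/master-config.yaml \
      ${host}:/etc/origin/master/master-config.yaml
done

# Restart the master API and controllers on every master so the new
# identity provider configuration is picked up cluster-wide.
for host in master1 master2 master3; do
  ssh ${host} 'systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers'
done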
I have been working on a project involving MySQL master-slave replication. I want to set up master-slave replication between AWS and GCP, where AWS RDS is the master and the slave (replica) is on the GCP side. But I want to create this replica on the GCP side without publicly exposing the master instance on AWS. That means this should happen over a private network.
I have found solutions where we can create a proxy for the master instance and then create the replica on the GCP side using the Cloud SQL migration services. But this is not what I want. I don't want to assign a proxy to the master instance.
The replica creation process should be within a private network.
What should I do next? Help.
Also, please do let me know if the question is still unclear.
Create a Transit Gateway between the AWS VPC and the GCP private network.
https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html
If a private network on the master (AWS) side is a must, then this won't be possible. The documentation about using Cloud SQL as an external replica is clear about the requirements for the source:
Ensure that the source database server meets these configuration requirements:
An externally accessible IPv4 address and TCP port.
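One quick way to confirm whether your source meets that requirement is a simple reachability test from a VM in your GCP project (a rough sketch; the RDS endpoint below is a placeholder):

# Can the RDS source be reached on its MySQL port from GCP?
# (endpoint and port are placeholders for illustration)
nc -vz mydb.example.us-east-1.rds.amazonaws.com 3306
# If the instance is private-only this will time out, which is exactly
# the configuration the documentation rules out for external replication.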
I am deploying an active-active all-in-one setup on 2 separate servers with WSO2 API Manager 2.6.0 and WSO2 Analytics 2.6.0. I am configuring my servers following this link. Regarding parts 4 and 5 about the rsync mechanism, I have some questions:
1. How can I figure out whether my servers are actually syncing via rsync (i.e., are in sync)?
2. What will happen in the future if I don't use rsync now and also don't apply the configuration from parts 4 and 5?
1. How can I figure out whether my servers are actually syncing via rsync (i.e., are in sync)?
It is not really clear what you are asking for. rsync is just a command to synchronize files between folders.
What is rsync used for: when deploying an API, the gateway creates or updates a few Synapse sequences or API artifacts in the filesystem (repository/deployment/server), and these file updates need to be synchronized to all gateway nodes.
I personally don't advise using rsync. The whole issue is that you need to invoke the rsync command regularly to synchronize the files created by the master node. That creates a certain delay for service availability and, most importantly, if something goes wrong and you want to use another node as the master, you need to switch the rsync direction, which is not a really automated process.
We usually keep it simple by using a shared filesystem (NFS, Gluster, ...) and then we have a fully active-active setup (OK, setting up HA NFS or GlusterFS is not particularly simple, but that's usually the job of the infra team).
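For completeness, a typical rsync setup is just a small script run from cron on the "master" gateway that pushes the deployment directory mentioned above to the other node (a sketch only; the hostname gateway2 and the product home path are placeholders):

#!/bin/bash
# sync-artifacts.sh - run from cron on the "master" gateway node.
# Pushes deployment artifacts to the second gateway over ssh.
rsync -avz --delete -e ssh \
  /opt/wso2am-2.6.0/repository/deployment/server/ \
  gateway2:/opt/wso2am-2.6.0/repository/deployment/server/

# Example crontab entry (every minute):
# * * * * * /opt/scripts/sync-artifacts.sh >> /var/log/apim-rsync.log 2>&1

That one-minute interval is exactly the availability delay mentioned above, and reversing the direction after a failover means manually changing this script on the new master.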
2. What will happen in the future if I don't use rsync now and also don't apply the configuration from parts 4 and 5?
In case the filesystems between the gateways are not synced or shared: you deploy an API from the Publisher to a single gateway node, but the other gateway nodes won't create the Synapse sequences and API artifacts. As a result, the other nodes won't pass client requests to the backend.
The documentation says that only pods that are managed by a Replication Controller will be restarted after a Kubernetes cluster update on Google Container Engine.
What about the pods that are managed by a Deployment?
In this case the documentation's language is overly specific. Any pods that are managed by a controller (ReplicationController, ReplicaSet, DaemonSet, Deployment, etc.) will be restarted. The warning is for folks who have created pods without a corresponding controller. Because nodes are replaced with new nodes (rather than upgraded in place), pods without a controller ensuring that they remain running will simply disappear.
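If you want to check a particular pod, you can look at its ownerReferences (an illustrative check only; my-pod is a placeholder name):

# Print the kind of controller that owns the pod, if any.
# A Deployment-managed pod prints "ReplicaSet"; a bare pod created
# directly prints nothing and will simply disappear when its node is
# replaced during the upgrade.
kubectl get pod my-pod -o jsonpath='{.metadata.ownerReferences[*].kind}'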
I've installed an ArangoDB instance on a Google Cloud virtual machine (tcp://10.240.0.2). I would like to set up an asymmetrical cluster with another VM where I've installed ArangoDB (tcp://10.240.0.3).
I followed the official guide to configure the production scenario: 1 coordinator and 1 DBServer on the same machine.
I also tried a second configuration to cluster the two VM instances, but it doesn't work, showing this error in the Google Chrome console:
{"error":true,"code":500,"errorNum":500,
"errorMessage":"Cannot check port on dispatcher tcp://10.240.0.3:8529"}
Here you can find the configurations that I have tried
What could be the error?
PS: I've opened the following ports in the firewall: 8529, 8530, 8629
Thanks in advance.
Daniele
Have you installed ArangoDB on both virtual machines and changed the configuration (on both) to set
[cluster]
disable-dispatcher-kickstarter = false
disable-dispatcher-frontend = false
and then restarted the database servers? I assume so, since you get "Connection OK" for both servers. Your browser would then talk to the first dispatcher, which in turn will contact the second one. The error message you get suggests that this latter step does not work, since checking ports is the first request the first dispatcher would send to the second one.
Is it possible that processes in the first VM cannot access tcp://10.240.0.3:8529 on the second VM? Maybe the respective other subnets are not routed from within the VMs?
Furthermore, when you have got this to work, you will almost certainly also need port 4001 on the first VM, because that is where our etcd (Agency) will listen. In addition, the ports 8530 and 8629 are the defaults which are tried first. If they are not usable for some reason, the dispatchers will use subsequent port numbers instead to assign them to the coordinators and DBservers. In that case you would have to open these as well, at least from the respective other VM.
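To narrow down the connectivity question, you can test from the first VM whether the second dispatcher and the other ports are actually reachable (a rough sketch; adjust the addresses to your setup):

# Run on the first VM (10.240.0.2): is the second dispatcher reachable?
curl -s http://10.240.0.3:8529/_api/version ; echo

# Check the default coordinator/DBserver ports on the second VM as well.
for port in 8530 8629; do
  nc -vz -w 3 10.240.0.3 $port
done

# And from the second VM, port 4001 on the first VM must be reachable
# for the Agency (etcd):
# nc -vz -w 3 10.240.0.2 4001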
I am new to GCE. I was able to create a new instance using the gcutil tool and the GCE console. There are a few questions that are unclear to me and I need help:
1) Does GCE provide a persistent disk when a new instance is created? I think it's 10 GB by default, not sure though. What is the right way to stop the instance without losing the data saved on it, and what would be the charge (US zone) if, say, I need 20 GB of disk space for that?
2) If I need SSL to enable HTTPS, are there any extra steps I should take? I think I will need to add a firewall rule as per the gcutil addfirewall command and create a certificate (or install one from a third party)?
1) Persistent disk is definitely the way to go if you want a root drive on which data retention is independent of the life cycle of any virtual machine. When you create a Compute Engine instance via the Google Cloud Console, the “Boot Source” pull-down menu presents the following options for your boot device:
New persistent disk from image
New persistent disk from snapshot
Existing persistent disk
Scratch disk from image (not recommended)
The default option is the first one ("New persistent disk from image"), which creates a new 10 GB PD, named after your instance name with a 'boot-' prefix. You could also separately create a persistent disk and then select the "Existing persistent disk" option (along with the name of your existing disk) to use an existing PD as a boot device. In that case, your PD needs to have been pre-loaded with an image.
Re: your question about cost of a 20 GB PD, here are the PD pricing details.
Read more about Compute Engine persistent disks.
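If you prefer the command line, the boot PD size can also be set when the instance is created. The gcutil tool you mention has since been replaced by gcloud; with it, a 20 GB boot disk would look roughly like this (instance name, zone and image are placeholders):

# Create an instance with a 20 GB boot persistent disk instead of the
# default size (name, zone and image are placeholders for illustration).
gcloud compute instances create my-instance \
  --zone us-central1-a \
  --image-family debian-12 --image-project debian-cloud \
  --boot-disk-size 20GB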
2) You can serve SSL/HTTPS traffic from a GCE instance. As you noted, you'll need to configure a firewall to allow your incoming SSL traffic (typically port 443) and you'll need to configure https service on your web server and install your desired certificate(s).
Read more about Compute Engine networking and firewalls.
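For example, the HTTPS firewall rule can be added with the gcutil addfirewall command you mentioned, or with the current gcloud CLI roughly like this (the rule name and target tag are placeholders):

# Allow incoming HTTPS (TCP 443) to instances tagged "https-server".
gcloud compute firewall-rules create allow-https \
  --allow tcp:443 \
  --target-tags https-server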
As an alternative approach, I would suggest deploying VMs using Bitnami. There are many stacks you can choose from, which will save you time when deploying the VM. I would also suggest you go with SSD disks, as the pricing is close between magnetic disks and SSDs, but the performance boost is huge.
As for serving the content over SSL, you need to figure out how the requests will be processed. You can use NGINX or Apache. In that case you would need to configure virtual hosts for the default ports: 80 for non-encrypted and 443 for SSL traffic.
The easiest way to serve SSL traffic from your VM is to generate SSL certificates using the Let's Encrypt service.
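For instance, with the Certbot client on a server that already answers on port 80 for your domain (example.com is a placeholder; swap --nginx for --apache if you run Apache):

# Obtain and install a Let's Encrypt certificate for an NGINX vhost.
sudo certbot --nginx -d example.com -d www.example.com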