Blue-Green System: How to handle databases?

I deployed two different versions of an application to two EKS clusters, blue and green. I plan to use a Route 53 weighted routing policy to switch between blue and green. The microservices inside the blue and green EKS clusters start reading and updating the databases as soon as the application is deployed, but only one application may access the database at a time. How can I do this?
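For reference, here is a minimal sketch of the weighted-routing flip with boto3; the hosted-zone ID, record name, and the two load-balancer DNS names are placeholders, and setting the weights to 100/0 routes all traffic to a single cluster:

    # Hypothetical names throughout; only the weighted-routing mechanics are shown.
    import boto3

    route53 = boto3.client("route53")

    def set_weights(blue_weight: int, green_weight: int) -> None:
        changes = []
        for set_id, dns_name, weight in (
            ("blue", "blue-alb.example.com", blue_weight),
            ("green", "green-alb.example.com", green_weight),
        ):
            changes.append({
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": set_id,  # distinguishes the two weighted records
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": dns_name}],
                },
            })
        route53.change_resource_record_sets(
            HostedZoneId="Z0000000EXAMPLE",
            ChangeBatch={"Changes": changes},
        )

    set_weights(100, 0)  # all traffic to blue; call with (0, 100) to cut over to green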

Related

How do we implement leader election for multiple replicas in Active-Standby (High Availability) mode with the sidecar pattern

How do I make only one application pod the leader out of 2 replicas? When I start the pods with two replicas behind a load balancer, both just start. I want an approach where only one is active and the other is standby; if the active pod goes down, the standby should become active in Kubernetes.
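One common approach is lease-based leader election. Below is a minimal sketch using a coordination.k8s.io Lease via the official Python Kubernetes client; the lease name, namespace, and timings are assumptions, and a production version would also need to retry on update conflicts:

    # Each replica runs this loop; only the current lease holder does active work.
    import datetime
    import time
    import uuid

    from kubernetes import client, config

    IDENTITY = str(uuid.uuid4())  # unique per replica, e.g. the pod name
    LEASE_NAME, NAMESPACE = "app-leader", "default"  # hypothetical
    LEASE_SECONDS = 15

    def try_acquire(api: client.CoordinationV1Api) -> bool:
        now = datetime.datetime.now(datetime.timezone.utc)
        try:
            lease = api.read_namespaced_lease(LEASE_NAME, NAMESPACE)
        except client.ApiException as e:
            if e.status != 404:
                raise
            # No lease yet: create it with ourselves as the holder.
            api.create_namespaced_lease(NAMESPACE, client.V1Lease(
                metadata=client.V1ObjectMeta(name=LEASE_NAME),
                spec=client.V1LeaseSpec(
                    holder_identity=IDENTITY,
                    lease_duration_seconds=LEASE_SECONDS,
                    renew_time=now,
                ),
            ))
            return True
        spec = lease.spec
        expired = (spec.renew_time is None or
                   (now - spec.renew_time).total_seconds()
                   > (spec.lease_duration_seconds or LEASE_SECONDS))
        if spec.holder_identity == IDENTITY or expired:
            spec.holder_identity = IDENTITY
            spec.renew_time = now
            api.replace_namespaced_lease(LEASE_NAME, NAMESPACE, lease)  # 409 means we lost the race
            return True
        return False

    config.load_incluster_config()
    api = client.CoordinationV1Api()
    while True:
        if try_acquire(api):
            pass  # we are active: do leader-only work here
        time.sleep(LEASE_SECONDS / 3)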

Connection pooling - what gives?

Ok, I have a web app on AWS that uses load balancing and autoscaling across two regions. When I check my connections, it seems as if connection pooling is not being used. Or is it just me who doesn't understand it properly? I have attached an image, and I have the following questions:
Just FYI: the colors indicate the same numbers, so the yellow ones are the same IP, and so on. I just wanted to blur them, because who knows, right?
Also: the green connections come from my computer, that is, me using Workbench. The red/yellow ones I assume are my two servers, and the blue one I am not sure about. I will check those IPs.
1) Why do the bottom red entry and the bottom yellow entry each have their own connection? It's the same IP, with the same user, calling the same DB.
2) Why are ports being appended to the host IP? Is this what is causing new connections to spawn?
3) The two top green connections are also not sharing the same connection. Why two separate ones?
4) Why is the Id (purple outline) the same for all connections?
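Some context that may help: a pool lives inside each application process, and each pooled connection is its own TCP connection with its own ephemeral client-side port, so a pool never merges connections across servers or processes. As a point of comparison, a minimal app-side pooling sketch with mysql-connector-python (host and credentials are placeholders):

    # One pool per process; connections are reused instead of reopened.
    import mysql.connector.pooling

    pool = mysql.connector.pooling.MySQLConnectionPool(
        pool_name="webapp",
        pool_size=5,               # up to 5 reusable TCP connections in this process
        host="db.example.com",
        user="app",
        password="secret",
        database="appdb",
    )

    conn = pool.get_connection()   # borrows an existing connection when one is idle
    try:
        cur = conn.cursor()
        cur.execute("SELECT CONNECTION_ID()")
        print(cur.fetchone())
    finally:
        conn.close()               # returns the connection to the pool, not the server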

Kubernetes multiple database instances or HA single instance

I have a Kubernetes environment running multiple applications (services), and now I'm a little confused about how to set up the MySQL database instance(s).
According to different sources, each microservice should have its own database. Should I create a single MySQL StatefulSet in HA mode running multiple databases, OR should I deploy a separate MySQL instance for each application (service), running one database each?
My first thought would be the first option; otherwise, what would HA be useful for? I would like to hear some different views on this.
Slightly subjective question, but here's what we have set up. Hopefully that will help you build a case. I'm sure someone would have a different opinion, and that might be equally valid too:
We deploy about 70 microservices, each with its own database ("schema") and its own JDBC URL (defined via a service). Each microservice has its own endpoint and credentials that we do not share between microservices. So in effect, we have kept the design completely independent across the microservices as far as the schema is concerned.
Deployment-wise, however, we have opted to go with a single database instance for hosting all databases (or "schemas"). While technically we could deploy each database on its own database instance, we chose not to do so for a few main reasons:
Cost overhead: Running separate database instances for each microservice would add a lot of "fixed" costs. This may not be directly relevant to you if you are simply starting the database as a MySQL Docker container (we use a separate database service, such as RDS or Google Cloud SQL). But even in the case of MySQL as a Docker container, you might end up with a non-trivial cost if you run, for example, 70 separate containers, one per microservice.
Administration overhead: Given that databases are usually quite involved (disk space, IOPS, backup/archiving, purging, upgrades and other administration activities), having separate database instances -- or Docker container instances -- may put a significant toll on your admin or operations teams, especially if you have a large number of microservices.
Security: Databases are usually also critical when it comes to security, as the "truth" usually goes in the DB. Keeping encryption, TLS configuration and strength of credentials aside (as they should be of utmost importance regardless of your deployment model), security considerations, reviews, audits and logging will bring in significant challenges if you have too many database instances.
Ease of development: Relatively less critical in the grand scheme of things, but significant nonetheless. Unless you are thinking of coming up with a different model for development (and thus breaking "dev-prod parity"), your developers may have a hard time figuring out the database endpoints for debugging, even if they only need that information once in a while.
So, my recommendation would be to go with a single database instance (Docker or otherwise), but keep the databases/schemas completely independent and inaccessible to any microservice but the "owner" microservice.
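As a sketch of that isolation (the service names, host, and passwords are purely illustrative), the per-service provisioning could look like this:

    # Create one schema and one user per microservice; grant each user access
    # to its own schema only, nothing cross-schema.
    import mysql.connector

    admin = mysql.connector.connect(host="mysql", user="root", password="admin-secret")
    cur = admin.cursor()

    for service in ("orders", "billing", "inventory"):
        cur.execute(f"CREATE DATABASE IF NOT EXISTS {service}")
        cur.execute(
            f"CREATE USER IF NOT EXISTS '{service}'@'%' IDENTIFIED BY %s",
            (f"{service}-secret",),
        )
        cur.execute(f"GRANT ALL PRIVILEGES ON {service}.* TO '{service}'@'%'")

    admin.close()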
If you are deploying MySQL as Docker container(s), go with a StatefulSet for persistence. Define an external PVC so that you can always preserve the data, no matter what happens to your pods or even your cluster. Of course, if you run active-active, you will need to ensure clustering between your nodes, but we run in active-passive mode, so we keep the replica count at 1, given that we only use the MySQL-in-Docker alternative in our test environments to save the cost of an external DBaaS service where it isn't required.
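For illustration, a rough sketch of that single-replica StatefulSet-plus-PVC layout built with the Python Kubernetes client (the names, image, and storage size are placeholders, not our actual manifest):

    # A 1-replica MySQL StatefulSet whose data lives on a PVC that outlives the pod.
    from kubernetes import client

    mysql_sts = client.V1StatefulSet(
        metadata=client.V1ObjectMeta(name="mysql"),
        spec=client.V1StatefulSetSpec(
            service_name="mysql",
            replicas=1,  # active-passive, as described above
            selector=client.V1LabelSelector(match_labels={"app": "mysql"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "mysql"}),
                spec=client.V1PodSpec(containers=[client.V1Container(
                    name="mysql",
                    image="mysql:8.0",
                    ports=[client.V1ContainerPort(container_port=3306)],
                    volume_mounts=[client.V1VolumeMount(
                        name="data", mount_path="/var/lib/mysql")],
                )]),
            ),
            # The claim template yields a PVC per pod, so data survives pod restarts
            # and rescheduling (surviving cluster loss still needs a retained external PV).
            volume_claim_templates=[client.V1PersistentVolumeClaim(
                metadata=client.V1ObjectMeta(name="data"),
                spec=client.V1PersistentVolumeClaimSpec(
                    access_modes=["ReadWriteOnce"],
                    resources=client.V1ResourceRequirements(
                        requests={"storage": "10Gi"}),
                ),
            )],
        ),
    )

    # client.AppsV1Api().create_namespaced_stateful_set("default", mysql_sts)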

Solution for a one-GCP-network-to-many-GCP-networks VPN topology that addresses internal IP ambiguity

I have a problem: our firm has many GCP projects, and I need to expose services in my project to these distinct GCP projects. Firewalling in individual IPs isn't really sustainable, as we dynamically spin up and tear down hundreds of GCE VMs a day.
I've successfully joined a network from my project to another project via GCP's VPN, but I'm not sure what the best practice is for joining multiple networks to my single network, especially since most of the firm uses the same default internal address range for each project's default network. I understand that doing it the way I am will probably work (though it's unclear whether traffic will actually reach the right network), but this creates a huge ambiguity in terms of IP collisions, where two VMs could exist in separate networks yet have the same internal IP.
I've read that outside of the cloud, most VPNs support NAT remapping, which seems to let you remap the internal IP space of the remote peer's subnet (e.g., 10.240.* to 11.240.*), so that the peer doing the remapping never sees an ambiguity.
I also know that Cloud Router may be an option, but it seems like a solution to a very specific problem that doesn't fully encompass this one: dynamically adding and removing subnets on the VPN.
Thanks.
I think you will need to use custom subnet mode networks (non-default) and specify non-overlapping IP ranges for the networks to avoid collisions. See "Creating a new network with custom subnet ranges" in this doc: https://cloud.google.com/compute/docs/subnetworks#networks_and_subnetworks
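As a small planning aid for picking those non-overlapping ranges, here is a sketch using Python's ipaddress module (the project names and CIDRs are made up):

    # Fail fast if any two networks you plan to join over VPN share address space.
    import itertools
    from ipaddress import ip_network

    ranges = {
        "hub-project": ip_network("10.0.0.0/16"),
        "spoke-a": ip_network("10.1.0.0/16"),
        "spoke-b": ip_network("10.2.0.0/16"),
    }

    for (name_a, net_a), (name_b, net_b) in itertools.combinations(ranges.items(), 2):
        if net_a.overlaps(net_b):
            raise SystemExit(f"collision: {name_a} {net_a} overlaps {name_b} {net_b}")
    print("no overlapping ranges; safe to join these networks over VPN")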

Couchbase XDCR and haproxy

I intend to set up a Couchbase system with two clusters: the main cluster is active, and the other is a backup (replicated via XDCR). HAProxy sits in front of this Couchbase system to switch (manually) from the active cluster to the backup cluster when the active cluster goes down.
Before testing, I want to ask for advice on this topology. Are there any problems with it? Can it run smoothly in a production environment?
I also think I cannot use a vBucket-aware client in this topology: because the client only knows about HAProxy, it cannot send a request directly to the Couchbase server that holds the vBucket for a specific document. Is that right?
From your scenario, it sounds like overhead. Why would you keep a "standby" cluster as a backup?
Instead, you can have all four instances of Couchbase Server as one cluster (each instance running on its own box), so you take full advantage of the vBucket architecture, which is natively managed. If one of the instances goes down, you will lose no data, since the enabled replication keeps a mirror copy on the other nodes.
We use this setup in production with no issues. From time to time we bring one of the instances down for maintenance, and the rest of the cluster keeps running, completely transparently to the Couchbase clients -- no downtime!
In my opinion, XDCR makes sense for geographically separated locations (so you keep one cluster in the Americas, another in EMEA, and so on). If all your instances are in the same location, then Couchbase cluster technology already delivers high availability (HA) with failover support built in.
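To make the client side concrete: a vBucket-aware client bootstraps off any listed node, learns the cluster map, and routes each operation straight to the node that owns the document's vBucket, with no proxy in between. A minimal sketch with the Couchbase Python SDK (node addresses, bucket name, and credentials are placeholders):

    # Connect directly to the cluster; the SDK handles vBucket routing itself.
    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions

    cluster = Cluster(
        "couchbase://node1,node2,node3,node4",
        ClusterOptions(PasswordAuthenticator("app-user", "app-password")),
    )
    collection = cluster.bucket("orders").default_collection()

    # Reads and writes go to the node owning this document's vBucket.
    collection.upsert("order::1001", {"status": "paid"})
    print(collection.get("order::1001").content_as[dict])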