MySQL InnoDB Cluster connection string parameters

I have set up a MySQL InnoDB Cluster on the latest release (8.0.27) with three nodes in single-primary mode, and I have a VB script and a connection string for it.
The current connection string looks like this:
jdbc:mysql://node1,node2,node3;db=test?multi_host=true
Assume my primary, node1, goes down; R/W access is passed to either node2 or node3, which becomes the new primary. During this time my connection string won't work, because it tries to connect to the first node and fails.
Is there any other parameter that can be passed in the connection string to handle such issues?
How does the connection string know which node is the primary so it can connect to it?
Thanks.

An InnoDB Cluster usually runs in single-primary mode, with one primary instance (read-write) and multiple secondary instances (read-only).
In order for client applications to handle failover, they need to be aware of the InnoDB cluster topology. They also need to know which instance is the PRIMARY. While it is possible for applications to implement that logic, MySQL Router can provide this functionality for you.
shell> mysqlrouter --bootstrap root@localhost:3310
MySQL Router connects to the InnoDB cluster, fetches its metadata and configures itself for use. The generated configuration creates 2 TCP ports: one for read-write sessions (which redirect connections to the PRIMARY) and one for read-only sessions (which redirect connections to one of the SECONDARY instances).
Once bootstrapped and configured, start MySQL Router (or set up a service for it to start automatically when the system boots):
shell> mysqlrouter &
You can now connect a MySQL client, such as MySQL Shell, to one of the incoming MySQL Router ports and see how the client is transparently connected to one of the InnoDB cluster instances.
shell> mysqlsh --uri root@localhost:6442
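With Router in place, the application no longer needs to list every node or know which one is the PRIMARY; it just connects to Router's read-write port and Router forwards the connection to the current PRIMARY. As a minimal sketch, assuming Router runs on the application host and listens on 6446, the usual classic-protocol read-write default (your bootstrap output will show the actual ports), the original JDBC connection string could become:
jdbc:mysql://localhost:6446/test
Because Router re-routes to whichever instance is PRIMARY after a failover, this string keeps working when node1 goes down.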
However, while a failover is in progress you can only read data; writes will not work until a new PRIMARY has been elected. If you want writes to remain available, see High Availability and Multi-Primary Cluster for MySQL.

Related

Amazon RDS switch server between Reader instance and Writer instance

I am using Amazon Aurora, and I have two database servers by default: the Reader instance and the Writer instance.
My application connects to the primary connection endpoint:
sample.cluster-sample.us-west-2.rds.amazonaws.com
However, my application suddenly became unable to write data to the database, and I found that the replica (sample-instance-r1) had become the Writer instance.
My application is written in Node.js with the mysql plugin and uses a connection pool. How can I prevent Amazon RDS from switching the Writer instance, or at least keep my application able to write data when it does?
The cluster endpoint address does not change due to failover:
To use a connection string that stays the same even when a failover promotes a new primary instance, you connect to the cluster endpoint. The cluster endpoint always represents the current primary instance in the cluster.
So what you are describing does not normally happen when you are using the cluster endpoint (not counting the time required for the failover to complete). Please make sure that your application is actually using that endpoint, as Aurora has multiple endpoints.
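The principle is independent of the client language. As an illustrative sketch in Java/JDBC (the cluster endpoint is the one from the question; the database name, credentials, and the health_check table are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;

public class AuroraWriterDemo {
    public static void main(String[] args) throws Exception {
        // Cluster endpoint: DNS always resolves to the current Writer,
        // even after a failover promotes a different instance.
        String clusterUrl =
            "jdbc:mysql://sample.cluster-sample.us-west-2.rds.amazonaws.com:3306/mydb";
        // By contrast, an instance endpoint (e.g. for sample-instance-r1)
        // is pinned to one instance and may become read-only after failover.
        try (Connection conn = DriverManager.getConnection(clusterUrl, "user", "password")) {
            conn.createStatement().executeUpdate(
                "INSERT INTO health_check (note) VALUES ('writer reachable')");
        }
    }
}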

Trying to create two MySQL pods in Kubernetes with the same volume for high availability

I am trying to deploy two MySQL pods with the same PVC, but the second pod enters a CrashLoopBackOff state with this error in the logs: "InnoDB: check that you do not already have another mysqld process using the same InnoDB log files". How can I resolve this error?
There are different options for achieving high availability. If you are running Kubernetes on infrastructure that can provision the volume to different nodes (e.g. in the cloud) and your pod or node crashes, Kubernetes will restart the database on a different node with the same volume. Aside from a short downtime, the database will be back up and running relatively quickly.
The volume will be mounted to a single running MySQL pod to prevent data corruption from concurrent access. (This is what MySQL notices in your scenario as well, since it is not designed to use shared storage as an HA solution.)
If you need more than that, you can use MySQL's built-in replication to create a MySQL 'cluster' that remains usable even if one node or pod fails. Each instance of the MySQL cluster then has its own individual volume. Look at the Kubernetes StatefulSet example for this scenario: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
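For instance, that tutorial's replicated setup can be deployed with commands along these lines (the manifest URLs are the ones used in the linked Kubernetes tutorial; verify them against the current docs):
shell> kubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml
shell> kubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml
shell> kubectl apply -f https://k8s.io/examples/application/mysql/mysql-statefulset.yaml
Each replica created by the StatefulSet gets its own PersistentVolumeClaim, which avoids the shared-volume conflict you are seeing.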

Docker : Multiple Instances for One MySQL Database

Can we create multiple containers in Docker that host a single shared database, and if we can, will we face any issues with those multiple instances on, for example, a SELECT event?
Thank you.
Allow me to paraphrase your question a bit. Please correct me if I have misunderstood anything.
Q: Can I run multiple instances of a MySQL database using Docker?
A: Short answer: yes, because a Docker container is just a process on your machine.
Q: If I have multiple instances of MySQL running on the same host, how do I know which instance I am performing my query on?
A: Well, it all depends on the connection string you set for your database client.
Every database instance will have a corresponding listener process that is bound to a specific port on the host.
Each port can only be bound to one process; it is a one-to-one relation.
Essentially, if you have 10 MySQL instances installed, each will be bound to a unique port. So the port number you define in your connection string determines the database instance you'll be talking to.
Something worth noting is that Docker containers are self-contained. You can sort of think of them as conventional virtual machines, except that they are much more lightweight. That is, a container has its own networking infrastructure, similar to your physical host. So for your physical host to be able to reach the containerized database, you'll have to publish (port-forward) the port it is bound to.
If the paragraph above doesn't make sense to you, then I recommend you explore Docker's port publishing (-p) option for a bit.
See: https://docs.docker.com/engine/userguide/networking/default_network/binding/
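For example, two independent MySQL containers published on different host ports could be started like this (container names, password, and host ports are illustrative):
shell> docker run -d --name mysql-a -e MYSQL_ROOT_PASSWORD=secret -p 3307:3306 mysql:8.0
shell> docker run -d --name mysql-b -e MYSQL_ROOT_PASSWORD=secret -p 3308:3306 mysql:8.0
A client then picks the instance purely by port, e.g. mysql -h 127.0.0.1 -P 3307 for the first container and -P 3308 for the second.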

Should the database be distributed when used as persistent storage for Storm in distributed mode

I have a Storm cluster which consists of Nimbus and 4 Supervisors, and I have MySQL installed on the same node as Nimbus:
Cluster information
Nimbus - 192.168.0.1
Supervisors - 192.168.0.2 ~ 5
MySQL - same node as Nimbus, bound to 0.0.0.0 (so that I can connect remotely)
I am trying to update a MySQL table in real time. If my bolt is running on, say, the 192.168.0.4 node, how does this node (bolt) send data (updates) to the MySQL server, which is running on another node? In Hadoop we have HDFS, which is available on all nodes of a cluster. My question is: do I need some distributed storage to store tuples, or should I make some configuration changes to my MySQL setup or Storm topology?
You should be able to open a database connection from each node to your MySQL installation. The connection will go over the network, thus, you can update your DB remotely.
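As a sketch, a bolt running on any supervisor node (192.168.0.2-5) can write to the MySQL server on the Nimbus host over a plain JDBC connection; the database, table, and column names below are hypothetical:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class TupleWriter {
    // MySQL runs on the Nimbus node and is bound to 0.0.0.0,
    // so it is reachable from every supervisor over the network.
    private static final String URL = "jdbc:mysql://192.168.0.1:3306/mydb";

    public static void writeTuple(String key, String value) throws Exception {
        try (Connection conn = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO events (k, v) VALUES (?, ?)")) {
            ps.setString(1, key);
            ps.setString(2, value);
            ps.executeUpdate();
        }
    }
}
In practice you would open the connection once in the bolt's prepare() method and reuse it, rather than reconnecting for every tuple.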

Persistently replicating an RDS MySQL database to an external slave

AWS now allows you to replicate data from an RDS instance to an external MySQL database.
However, according to the docs:
Replication to an instance of MySQL running external to Amazon RDS is only supported during the time it takes to export a database from a MySQL DB instance. The replication should be terminated when the data has been exported and applications can start accessing the external instance.
Is there a reason for this? Can I choose to ignore this if I want the replication to be persistent and permanent? Or does AWS enforce this somehow? If so, are there any work-arounds?
It doesn't look like Amazon explicitly states why they don't support ongoing replication, other than the statement you quoted. In my experience, if AWS doesn't explicitly document a reason why they do something, you're not likely to find out unless they decide to document it at a later time.
My guess would be that it has to do with the dynamic nature of Amazon instances and how they operate within RDS. RDS instances can have their IP address change suddenly without warning. We've encountered that on more than one occasion with the RDS instances that we run. According to the RDS Best Practices guide:
If your client application is caching the DNS data of your DB instances, set a TTL of less than 30 seconds. Because the underlying IP address of a DB instance can change after a failover, caching the DNS data for an extended time can lead to connection failures if your application tries to connect to an IP address that no longer is in service.
Given that RDS instances can and do change their IP address from time to time, my guess is that they simply want to avoid having to support people who set up external replication only to have it suddenly break if/when an RDS instance is assigned a new IP address. Unless you set up the replication user and any firewalls protecting your external MySQL server to be pretty wide open, replication could suddenly stop if the RDS master reboots for any reason (maintenance, hardware failure, etc.). From a security point of view, opening up your replication user and firewall port like that is not a good idea.
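On the client side, the TTL advice quoted above is worth applying to any long-lived process that connects to RDS. For a JVM-based client, for example, the JVM's internal DNS cache can be capped like this (the 15-second value is illustrative):
import java.security.Security;

public class DnsCacheConfig {
    public static void main(String[] args) {
        // Cap the JVM's positive DNS cache so a changed RDS IP address
        // is re-resolved quickly instead of being cached indefinitely.
        Security.setProperty("networkaddress.cache.ttl", "15");
    }
}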