Can we create multiple containers in Docker that host a single shared database, and if we can, will we face any issues with those multiple instances on a SELECT event, for example?
Thank you.
Allow me to paraphrase your question a bit. Please correct me if I have misunderstood anything.
Q: Can I run multiple instances of a MySQL database using Docker?
A: Short answer: yes, because a Docker container is just a process on your machine.
Q: If I have multiple instances of MySQL running on the same host, how do I know which instance I am performing my query on?
A: Well it all depends on the connection string you set for your database client.
Every database instance will have a corresponding listener process bound to a specific port on the host.
Now, each port can only be bound to one process; it is a one-to-one relation.
Essentially, if you have 10 SQL instances installed, each will be bound to a unique port. So the port number you define in your connection string determines the database instance you'll be talking to.
Something worth noting is that Docker containers are self-contained. You can sort of see them as conventional virtual machines, except that they are much more lightweight. That is, a container has its own networking stack, similar to your physical host. So for your physical host to be able to see the containerized database, you'll have to publish (port-forward) the port the database listens on.
If the paragraph above doesn't make any sense to you, then I recommend exploring Docker's port publishing (the -p option) for a bit.
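For example, here is a minimal sketch of two MySQL containers published on different host ports (the container names, password and port numbers are made up for illustration):
docker run -d --name mysql-a -e MYSQL_ROOT_PASSWORD=secret -p 3307:3306 mysql:8.0
docker run -d --name mysql-b -e MYSQL_ROOT_PASSWORD=secret -p 3308:3306 mysql:8.0
# The host port in the connection string selects the instance:
mysql -h 127.0.0.1 -P 3307 -u root -p   # talks to mysql-a
mysql -h 127.0.0.1 -P 3308 -u root -p   # talks to mysql-b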
See: https://docs.docker.com/engine/userguide/networking/default_network/binding/
I have set up a MySQL InnoDB Cluster on the latest release (8.0.27), with three nodes and a single primary. I have a VB script and a connection string for it.
The current connection string looks like:
jdbc:mysql://node1,node2,node3;db=test?multi_host=true
Assume my primary, node1, goes down; read/write access will be passed on to either node2 or node3, which becomes the new primary. During this, my connection string won't work, as it tries to connect to the first node and fails.
Is there any other parameter that can be passed in the connection string to handle such issues?
How does the connection string know which node is the primary and connect to it?
Thanks.
An InnoDB Cluster usually runs in a single-primary mode, with one primary instance (read-write) and multiple secondary instances (read-only).
In order for client applications to handle failover, they need to be aware of the InnoDB cluster topology. They also need to know which instance is the PRIMARY. While it is possible for applications to implement that logic, MySQL Router can provide this functionality for you.
shell> mysqlrouter --bootstrap root@localhost:3310
MySQL Router connects to the InnoDB cluster, fetches its metadata and configures itself for use. The generated configuration creates 2 TCP ports: one for read-write sessions (which redirect connections to the PRIMARY) and one for read-only sessions (which redirect connections to one of the SECONDARY instances).
Once bootstrapped and configured, start MySQL Router (or set up a service for it to start automatically when the system boots):
shell> mysqlrouter &
You can now connect a MySQL client, such as MySQL Shell, to one of the incoming MySQL Router ports and see how the client gets transparently connected to one of the InnoDB cluster instances.
shell> mysqlsh --uri root@localhost:6442
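Applied to the connection string from the question, the application would then point at a single Router port instead of listing the cluster nodes. A sketch, assuming 6442 is the read-write port from the generated mysqlrouter.conf (check the bootstrap output for the actual port numbers on your setup):
jdbc:mysql://localhost:6442/test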
However, when the primary node fails, you can still read data, but writes will not work. If you want writes to keep working, see High Availability and Multi-Primary Cluster for MySQL.
See this for more detail.
I have a container with MySQL that is configured to start with "-v /data:/var/lib/mysql" and therefore persists data between container restarts in a separate folder. This approach has some drawbacks; in particular, the user may not have write permissions for the specified directory. How exactly should the container be reconfigured to use Docker's implicit per-container storage, so that MySQL data is saved under /var/lib/docker/volumes and can be reused after the container is stopped and started again? Or is it better to consider other persistence options?
What you show is called a bind mount.
What you are asking for is called a volume.
Just create a volume and connect it:
docker volume create foo
docker run ... -v foo:/var/lib/mysql <image> <command>
And that's it! You can attach the volume to as many containers as you like.
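As a concrete sketch for the MySQL case (the volume name, password and image tag below are arbitrary), the data survives removing and recreating the container:
docker volume create mysql-data
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql mysql:8.0
docker rm -f db
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql mysql:8.0
# The second container starts with the data written by the first one,
# because both mount the same named volume stored under /var/lib/docker/volumes.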
I've set up a Percona XtraDB Cluster with 5 nodes on a network that also has a ProxySQL server. I have ProxySQL working: I can log in to the admin interface on port 6032 and administer it, and I can also log in through port 6033, connecting to the cluster.
The problem (at least as I see it) is that I am only able to get through the proxy to the cluster (port 6033) by duplicating the user/pass for the cluster at the ProxySQL level.
I would have thought that there would be some way to have the credentials simply pass through the proxy to the cluster or at least some other way to not have to store the user/pass in two points for these connections.
Is this all by design, and am I just hoping for something that doesn't exist for good reasons (security, best practices)? Or is there some way to improve this setup so that I don't have to tell ProxySQL about every database user that ever needs to access the cluster databases?
In short: yes. It's simply the way ProxySQL handles queries.
Also, if security is one of your concerns, you may consider password hashing on the ProxySQL side.
Here's the official doc on how to configure it: Password management.
From the Wiki:
Because ProxySQL performs routing based on traffic, when a client connects it cannot yet identify a destination HG, therefore ProxySQL needs to authenticate the client. For this reason, it needs to have some information related to the password of the user: enough information to allow the authentication.
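For illustration only, registering a cluster user in ProxySQL with a hashed password rather than cleartext might look roughly like this (admin credentials, username, hostgroup and the hash value are placeholders; generate the hash as described in the Password management doc):
mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "
INSERT INTO mysql_users (username, password, default_hostgroup)
VALUES ('app_user', '*<SHA1-based hash of the password>', 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;"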
From the docs at http://docs.ejabberd.im/admin/guide/clustering/#clustering-setup
Adding a node into the cluster is done by starting a new ejabberd node within the same network, and running a command from a cluster node. On second node for example, as ejabberd is already started, run the following command as the ejabberd daemon user, using the ejabberdctl script: ejabberdctl join_cluster 'ejabberd@first'
How does this translate into deployment in the cloud, where instances can (hopefully) be shut down/restarted based on a consistent image and behind a load balancer?
Can all of them, including the initial instance, use "example.com" as "first" in the example above (assuming "example.com" is set up in DNS to point to the cloud load balancer)?
Or must the first instance not attempt to join a cluster, and subsequent ones all use the IP address of that initial instance instead of "first" (and if this is the case, does it get wacky if that initial instance goes down)?
Can all of them, including the initial instance, use "example.com" as "first" in the example above (assuming "example.com" is set up in DNS to point to the cloud load balancer)?
No. The node name parameter is the node name of an Erlang ejabberd node. Moreover, it should be on the internal Amazon network, not the public one, so it should not rely on a central DNS. It must be the name of an Erlang node, as the newly started node will connect to the existing node to share the same "cluster schema" and do an initial sync of the data.
So, the deployment is done as follows:
The first instance does not need to join a cluster, as there is no cluster schema to share yet.
A new instance can use the node name of any other node of the cluster. It will add itself to the ejabberd cluster schema, which means ejabberd knows that users can be on any node of this cluster. You can point to any running node in the cluster to add a new one, as they are all equivalent (there is no master); see the sketch below.
You still need to configure the load balancer to balance traffic to the public XMPP port on all nodes.
You only need to perform the cluster join once for each extra cluster node. The configuration with all the nodes is kept locally, so when you stop and restart a node, it will automatically rejoin the cluster after it has been properly set up.
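A sketch of that joining step, assuming the first node's Erlang node name is ejabberd@ip-10-0-0-11 (an internal EC2 hostname made up for this example):
# On each additional instance, after ejabberd has started, run as the ejabberd daemon user:
ejabberdctl join_cluster 'ejabberd@ip-10-0-0-11'
# Check that the node is now part of the cluster:
ejabberdctl list_cluster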
Several months ago, I followed http://aws.amazon.com/articles/1663 and got it all running. Then, my PC crashed and I lost the keypair (http://stackoverflow.com/questions/7949835/accessing-ec2-instance-after-losing-keypair) and could no longer access the instance.
I now want to launch a new instance, mount this MySQL/DB volume that is left over from before, and see if I can get to the data on it. How can I go about doing that?
You outlined the correct approach to this problem already, and the author of the article you referenced, Eric Hammond, has written another one detailing this very process, see Fixing Files on the Root EBS Volume of an EC2 Instance - it boils down to:
start another EC2 instance
stop the EC2 instance you can't access anymore
detach the EBS volume from the stopped instance
attach the EBS volume to the running instance
SSH into the running instance
mount the EBS volume in the running instance
perform whatever fixes are necessary, e.g. adjust the /var permissions in your case
Please see Eric's instructions for details on how to do this from the command line; obviously, you can achieve all steps up to the SSH access via the AWS Management Console as well, removing the need to install the Amazon EC2 API Tools in case they aren't already available.
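The article predates the current tooling, but a rough sketch of the same steps with today's aws CLI would be (all IDs, device names and the mount point are placeholders):
aws ec2 stop-instances --instance-ids <old-instance-id>
aws ec2 detach-volume --volume-id <volume-id>
aws ec2 attach-volume --volume-id <volume-id> --instance-id <new-instance-id> --device /dev/sdf
# Then, over SSH on the running instance:
sudo mkdir -p /mnt/recovered
sudo mount /dev/xvdf /mnt/recovered     # the device may show up as /dev/xvdf or /dev/nvme1n1
ls /mnt/recovered/var/lib/mysql         # inspect the recovered MySQL data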