How can I shard a MySQL database with Vitess using both Docker images?

I found out about Vitess, which lets you shard a MySQL database.
I want to use the Docker images of both MariaDB and Vitess, but I'm not quite sure what to do next. I'm using CentOS 7.
I installed the images:
docker pull mariadb
docker pull vitess/root
docker pull vitess/orchestrator
Then I logged into the Vitess image:
sudo docker run -ti vitess/root bash
As the website said, I ran the build:
make build
I set the environment variables:
export VTROOT=/vt
export VTDATAROOT=/vt/vtdataroot
The manual says these are under the home directory, but in the image they are at the filesystem root.
But after that I'm stuck. I launch ZooKeeper: ./zk-up.sh
Starting zk servers...
Waiting for zk servers to be ready...
Started zk servers.
ERROR: logging before flag.Parse: E0412 00:31:26.378586 132 syslogger.go:122] can't connect to syslog
W0412 00:31:26.382527 132 vtctl.go:80] cannot connect to syslog: Unix syslog delivery error
Configured zk servers.
Oops, okay, let's continue...
./vtctld-up.sh for the web interface
Starting vtctld...
Access vtctld web UI at http://88bdaff4e181:15000
Obviously I cannot access that link, since it's inside Docker on a headless server.
./vttablet-up.sh is supposed to bring up 3 vttablets, but MariaDB is in another container that isn't started yet, and when I open the script it is not apparent how to set that up.
Is there any MySQL or PostgreSQL sharding solution that is easier to install?
Or how can I set this up?
(Docker noob here sorry)
Thanks!

If you need multiple containers orchestrated, your best bet is docker-compose. You can define all the application dependencies as separate containers and network them so they can reach each other.
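For example, a minimal sketch of the wiring (assuming docker-compose is installed; the service definitions below are illustrative, not a complete Vitess topology):

# Write an illustrative docker-compose.yml; service names are placeholders.
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  mariadb:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder password
  vitess:
    image: vitess/root
    command: bash -c 'sleep infinity'   # placeholder; run the vitess scripts here
    depends_on:
      - mariadb
EOF
# Both services join the same default network and can reach each other
# by service name (e.g. the vitess container can resolve host "mariadb").
docker-compose up -d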

Related

How to link a running docker container with mysql?

I want to connect my already-running Jenkins container to a MySQL database in another container. I have created a database named jenkins and a user named jenkins in MySQL.
Can it be done without using the run command, because run creates a fresh container and I want to use the existing one?
You can use docker network connect to attach both running containers to the same network so they can communicate; see the docker network connect documentation.
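For example, assuming the containers are named jenkins and mysql (adjust to your actual container names):

docker network create jenkins-net            # create a user-defined bridge network
docker network connect jenkins-net mysql     # attach the running MySQL container
docker network connect jenkins-net jenkins   # attach the running Jenkins container
# Jenkins can now reach the database at hostname "mysql" on port 3306,
# without re-running either container.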

I get an EHOSTUNREACH error when trying to connect a Node.js OpenShift application to the MySQL database of another OpenShift application

I have 2 applications on OpenShift: one with MySQL, and one with Node.js that is going to connect to the other app's MySQL.
I've seen examples, but none of them seem to work. These are the steps I'm taking:
rhc ssh -a mydbappname
then I get the environment variables with
env | grep MYSQL
I get something like:
OPENSHIFT_MYSQL_DB_HOST=127.XX.XXX.X
OPENSHIFT_MYSQL_DB_PORT=3306
After that I try to use those values in the other app, but it always throws EHOSTUNREACH, no matter whether I create the OPENSHIFT_MYSQL_DB environment variables on the Node app and use them, or put the values directly in the code.
I have seen elsewhere that OPENSHIFT_MYSQL_DB_HOST can be something like 54d10be2503e36378e0002db-mydbappname.apps.rhcloud.com.
If I use port-forward, point at 127.0.0.1 with the local port selected for MySQL, and start the Node.js application locally, it works; it only fails when I upload the changes to OpenShift.
In this article:
https://blog.openshift.com/sharing-database-across-applications/
You can read:
"Step 1: Create an application with a database
We will create a scalable PHP application using a MySQL database cartridge. In non-scalable applications, the database will be installed in the same gear as the application. In this case we want the database to be accessible from other gears. So creating a scalable application ensures that the database runs in its own gear that can be accessed from other gears."
So maybe this is the problem: check whether the application you created is scalable.
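For reference, rhc creates a scalable application when you pass the -s flag; the app and cartridge names below are illustrative (list the available cartridges with rhc cartridge list):

# -s makes the app scalable, so the MySQL cartridge runs in its own gear
# that other gears can reach.
rhc app create mydbappname php-5.4 mysql-5.5 -s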

How to use Navicat to connect to the MySQL database in Openshift

I'm using OpenShift to build my apps, and I added MySQL to my gear.
But if I want to reach my database, I can't use Navicat, which is my usual way to manage databases. I have to SSH into my OpenShift server and use the mysql command line, which is a poor workflow compared to Navicat.
So, how can I reach my database in Openshift with Navicat?
I've used env | grep MYSQL to get my MySQL configuration and entered it in Navicat.
However, it has no effect.
If it's a scalable application, you should be able to connect to it externally using the connection information supplied by the environment variables. If it's not a scalable app, you'll need to use the rhc port-forward command to forward the ports needed to connect.
Take a look at the following article for more information: https://www.openshift.com/blogs/getting-started-with-port-forwarding-on-openshift
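For example, assuming your app is named mydbappname:

rhc port-forward -a mydbappname   # prints the local address and port it binds for MySQL
# Then point Navicat at 127.0.0.1 with the forwarded port, using the user name
# and password shown by "env | grep MYSQL" inside the gear.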

How to start multiple MySQL instances on boot

I currently have 3 MySQL instances on my 64-bit CentOS Linux server. When the server boots, it only starts the first MySQL instance. At that point I have to run "mysqld_multi stop" and then "mysqld_multi start" to ensure all 3 are started. Is there any way Linux can start all 3 at boot so I don't have to do this every time I reboot the server?
You can use the crontab @reboot option:
http://unixhelp.ed.ac.uk/CGI/man-cgi?crontab+5
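For example, adding a line like this to root's crontab (crontab -e) would start all configured instances at boot; the path to mysqld_multi may differ on your system (check it with which mysqld_multi):

@reboot /usr/bin/mysqld_multi start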
After researching this a little, it looks like you don't have many options, and they aren't as elegant as I would like.
Creating a separate MySQL server instance
Running Multiple MySQL 5.6 Instances on one server in CentOS 6/RHEL 6/Fedora
This link explains pretty well how to create another MySQL server instance that starts at boot time. You could certainly improve on what he describes, but it's a start. Essentially, he copies the /etc/init.d/mysqld startup script and the /etc/my.cnf configuration file, and has the new startup script reference the new configuration file.
Creating a unified startup script
You could also run chkconfig mysqld off to disable MySQL's built-in startup script and create your own that runs your mysqld_multi commands at boot time, as sketched below.
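A minimal sketch of such a script, assuming SysV init on CentOS and that your instances are already defined for mysqld_multi in /etc/my.cnf (the script name is hypothetical):

#!/bin/sh
# chkconfig: 345 64 36
# description: Start/stop all mysqld_multi-managed MySQL instances.
# Install (hypothetical name): copy to /etc/init.d/mysqld-multi, then run
# "chmod +x /etc/init.d/mysqld-multi" and "chkconfig --add mysqld-multi".
case "$1" in
  start) /usr/bin/mysqld_multi start ;;
  stop)  /usr/bin/mysqld_multi stop ;;
  *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
esac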
Let me know if you are looking for more information.

MySQL Cluster + Manager and NDB/J

I've been trying for a few days to set up a MySQL Cluster using MySQL Cluster Manager on 3 Ubuntu nodes (3 identical VM instances with 1GB RAM each).
I've followed the video on MySQL Cluster Manager on the MySQL site. There isn't much other documentation or many tutorials on it (probably because it's a commercial product).
I start the cluster and show the status, but the mysqld nodes never start; they just remain as "added". If I install mysql-server using "sudo apt-get install mysql-server", I get the normal local server running and the nodes register as "started", but I can't see how to connect to the cluster rather than to the individual MySQL servers running on the mysqld nodes.
I'm also at a loss as to how the Java connector for MySQL Cluster is organised; it appears that there are multiple libraries, so I don't even know which one I need or how to get them (are some created when compiling MySQL Cluster?). Could someone please explain how the connectors work for interacting with NDB from Java, and how to get them?
Thanks for any answers.
First of all, the official documentation for MySQL Cluster Manager can be found by navigating to the Cluster documentation on dev.mysql.com (called "MySQL Cluster Manager"). You are correct that MySQL Cluster Manager is commercial software although MySQL Cluster itself is available under a commercial or GPL license.
It sounds as though you've already configured the agents and have them running, so if you want to get a Cluster up and running quickly, refer to this simple worked example of using MySQL Cluster Manager.
In terms of understanding why the MySQL Servers (mysqlds) are not starting up, there aren't many clues in your question, so we need to narrow it down (one reason could be that you have multiple mysqlds defined on the same host, all trying to use the default port, 3306).
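A quick way to check for that on each host (illustrative):

sudo netstat -tlnp | grep 3306   # shows any process already bound to the default MySQL port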
To check what the manager has been doing, take a look in the file called mysql-cluster-manager.log. You can adjust the level of logging using the cluster manager configuration file.
To see what MySQL Cluster itself thinks has happened, check the directories storing the cluster data files (if you haven't overwritten the defaults, this will be under /clusters/, where you'll see a directory for each node in the cluster). The first one to check is ndb__cluster.log, along with the other logs you'll find in the "data" sub-directory of the id associated with the ndb_mgmd node. There are also per-node log files, so also check the mysqld_out.err and mysqld_out.log files stored in the data directory associated with the mysqld node-ids.
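For example (the paths are illustrative; adjust them to wherever your agents keep their data):

cd /path/to/manager-data                  # hypothetical location of the agent's data
tail -n 100 mysql-cluster-manager.log     # what the manager has been doing
find clusters/ -name '*_cluster.log'      # the ndb_mgmd cluster log
find clusters/ -name 'mysqld_out.*'       # per-mysqld error and output logs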
The most important point: do not use the mysqld that gets installed with "sudo apt-get install mysql-server", as that version will not be compatible with MySQL Cluster. Always use the binaries that come with the MySQL Cluster tarball (if you're using Cluster Manager, that should be transparent to you anyway).
Note that if you want to get MySQL Cluster up and running on a single host without MySQL Cluster Manager then refer to the quick-start guide located on the MySQL Cluster download site (on mysql.com rather than e-Delivery).
For Java access, try out this MySQL Cluster ClusterJ tutorial.