I have 2 computers running Windows 7 and I wanted to try MySQL database replication.
The configuration is as follows:
2 management nodes
2 data nodes
2 mysqld nodes
I have configured the config.ini file:
[ndbd default]
noofreplicas=2
[ndbd]
hostname=192.168.0.1
NodeId=1
[ndbd]
hostname=192.168.0.2
NodeId=2
[ndb_mgmd]
hostname=192.168.0.1
NodeId=101
[ndb_mgmd]
hostname=192.168.0.2
NodeId=102
[mysqld]
NodeId=51
hostname=192.168.0.1
[mysqld]
NodeId=52
hostname=192.168.0.2
and the my.cnf on node 1 (192.168.0.1):
[mysqld]
ndb-nodeid=51
ndbcluster
datadir=c:\\Users\\Brian\\my_cluster\\mysqld_data
basedir=c:\\Users\\Brian\\mysqlc
port=5000
server-id=51
log-bin
and the my.cnf on node 2 (192.168.0.2):
[mysqld]
ndb-nodeid=52
ndbcluster
datadir=c:\\Users\\Brian\\my_cluster\\mysqld_data
basedir=c:\\Users\\Brian\\mysqlc
port=5000
server-id=52
log-bin
However, when I logged in to the database on one node (the master), created a new database, and inserted a new table, I realised that the other node was not able to sync it.
Could someone help me address this issue?
Brian,
first of all, just to check terminology - it appears that you're trying to use "MySQL Cluster" (where the data is held - and synchronously replicated - in the data nodes rather than in the mysqlds) rather than "MySQL Replication" (which asynchronously replicates data between mysqlds).
I'll assume that the mysqlds are the ones that came with the MySQL Cluster package ('regular' mysqlds will not work correctly).
In order for the tables to be stored in the data nodes (and hence be visible through all mysqlds) rather than in the local mysqld, you must specify that MySQL Cluster is the storage engine to be used: "CREATE TABLE mytab (....) ENGINE=NDBCLUSTER;". If you don't do that then the tables will be created with the InnoDB storage engine and will be local to each mysqld.
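For example, a minimal sketch (the database, table, and column names here are just placeholders):

CREATE DATABASE IF NOT EXISTS clusterdb;
USE clusterdb;
-- ENGINE=NDBCLUSTER stores the table in the data nodes,
-- so it is visible through every mysqld in the cluster
CREATE TABLE mytab (
    id INT NOT NULL PRIMARY KEY,
    val VARCHAR(32)
) ENGINE=NDBCLUSTER;
-- Verify which engine was actually used; the output should say ENGINE=ndbcluster
SHOW CREATE TABLE mytab;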
Note that if you want your MySQL Cluster to be completely fault tolerant then you should move the ndb_mgmds to their own host(s) rather than co-locating them with data nodes; the reason for this is explained in MySQL Cluster fault tolerance – impact of deployment decisions.
Andrew.
Related
We are setting up NDB Cluster to NDB Cluster replication. In the MySQL documentation I found the following:
https://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-replication-issues.html
Using --binlog-ignore-db=mysql means that no changes to tables in the mysql database are written to the binary log. In this case, you should also use --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is replicated.
But when I set --binlog-ignore-db=mysql on the master mysqld node and --replicate-do-table=mysql.ndb_apply_status on the slave mysqld node, the application database updates are not replicated; only mysql.ndb_apply_status is replicated from the source. If I remove --replicate-do-table=mysql.ndb_apply_status then both mysql.ndb_apply_status and the application database are replicated. I am not sure why the MySQL documentation says to use --replicate-do-table=mysql.ndb_apply_status on the slave node, and not sure whether it breaks anything if I use --binlog-ignore-db=mysql without setting --replicate-do-table=mysql.ndb_apply_status on the slave. Any help?
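For reference, this is where those options sit in the two option files (a sketch; the server-id values and the other surrounding lines are assumptions, not taken from my actual files):

# master mysqld my.cnf (sketch)
[mysqld]
server-id=1
log-bin
binlog-ignore-db=mysql

# slave mysqld my.cnf (sketch)
[mysqld]
server-id=2
replicate-do-table=mysql.ndb_apply_status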
I have read that MaxScale (Binlog Server) from MariaDB can be used to relay the binlogs from a MySQL Cluster to a single Binlog Server. However, I wanted to know if it is possible to collect all the binlogs from different MySQL Clusters and persist them on a single Binlog Server, with no MySQL slaves reading from it. If that is possible, how are conflicts, such as the same database name existing in different MySQL Clusters, handled?
The binlogrouter module in MaxScale is a replication proxy. It stores the binary logs of the master on the MaxScale server. Unlike a normal replication slave, the binlogrouter will not write its own binary log. This means that the binary logs on the MaxScale server will be identical to those on the original master server.
To collect binlogs from multiple clusters, you need to configure multiple binlogrouter services and point each one of them to a master server. The binary logs are stored separately for each configured service.
Here's an example configuration of a binlogrouter service and a listener:
[Replication-Router]
type=service
router=binlogrouter
version_string=10.0.17-log
router_options=server_id=4000,binlogdir=/var/lib/maxscale/,filestem=mysql-bin
user=maxuser
passwd=maxpwd
[Replication-Listener]
type=listener
service=Replication-Router
protocol=MySQLClient
port=3306
Read the Binlogrouter documentation for more information about the various options.
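For instance, a second cluster could be relayed by a second service with its own binlogdir and listener port, along these lines (the service names, server_id, directory, and port below are assumptions):

[Replication-Router-2]
type=service
router=binlogrouter
version_string=10.0.17-log
router_options=server_id=4001,binlogdir=/var/lib/maxscale/cluster2/,filestem=mysql-bin
user=maxuser
passwd=maxpwd
[Replication-Listener-2]
type=listener
service=Replication-Router-2
protocol=MySQLClient
port=3307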
I have successfully synced my primary and secondary databases using MySQL master-master replication (guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-mysql-master-master-replication). However, I am stuck on a weird issue: every time the mysql service is restarted, the mysql-bin.xxxxxx file number gets incremented (e.g. mysql-bin.000005 to mysql-bin.000006) and this stops the master-master replication process. How can I prevent the mysql-bin file rotation, or is there any way to deal with this increment so that even if the rotation happens the replication won't get affected?
Thanks :)
Update
As usual, I had to find the answer to my own question. Here is the solution for those who are facing the same situation: just add the line log_slave_updates=1 to the my.cnf file on both servers.
Master - Master Replication:
Primary my.cnf:
#bind-address=127.0.0.1
server-id=1
log_bin=/var/log/mysql/mysql-bin.log
binlog_do_db=db_name_1
binlog_do_db=db_name_2
binlog_do_db=db_name_3
max_binlog_size=100M
log_slave_updates=1
auto_increment_increment=2
auto_increment_offset=1
Secondary my.cnf:
#bind-address=127.0.0.1
server-id=2
log_bin=/var/log/mysql/mysql-bin.log
binlog_do_db=db_name_1
binlog_do_db=db_name_2
binlog_do_db=db_name_3
max_binlog_size=100M
log_slave_updates=1
auto_increment_increment=2
auto_increment_offset=2
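To confirm that replication survives a restart, the standard status checks can be used on each server (a quick sketch; run in the mysql client):

-- Check that both replication threads are running on the peer
SHOW SLAVE STATUS\G
--   Slave_IO_Running: Yes
--   Slave_SQL_Running: Yes
-- Compare the file/position being read against the current binlog on the source
SHOW MASTER STATUS;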
I have a MySQL NDB cluster (details below). The problem is that even the simplest thing, such as restoring a database that was dumped using mysqldump, takes an absolute age: 6 hours to restore a DB that's 745MB in size and has approx 2.7 million rows across about 30 tables, all pretty standard stuff.
I've looked for bottlenecks: no single CPU core is overloaded, nor the disks, nor the network, so why so slow?
FYI, while importing a database the network is utilised at approx 2Mbit/s and the NDB nodes are writing to disk at about 1MB per second... hardly utilised. There is no swapping... the DB is entirely in memory... no single core is maxed out by a process... no wait-state to note...
I've got two machines, each with four quad-core Xeon CPUs and 32GB RAM. Between them they host a MySQL Cluster; the nodes are hosted with VirtualBox and their specs are as follows:
SQL API node * 2: 4GB RAM, 4 cores
NDB data node * 2: 19GB RAM, 8 cores
Management node: 4GB RAM, 4 cores
Note: I run the NDB nodes using ndbmtd, and the SQL API nodes use the ndb-cluster-connection-pool=4 parameter.
Does anyone have any idea why it's so slow? I'm simply unable to find a single bottleneck.
config.ini
[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster
[ndb_mgmd]
HostName=mgm-alpha
NodeId=1
[TCP DEFAULT]
SendBufferMemory=12M
ReceiveBufferMemory=12M
[ndbd default]
NoOfReplicas=2
DataMemory=15000M
IndexMemory=1048M
MaxNoOfConcurrentOperations=100000
RedoBuffer=32M
MaxNoOfAttributes=5000
MaxNoOfOrderedIndexes=1000
TimeBetweenEpochs=500
DiskCheckpointSpeed=20M
DiskCheckpointSpeedInRestart=100M
MaxNoOfExecutionThreads=8
DataDir=/var/lib/mysql-cluster
[ndbd]
HostName=cl1-alpha
NodeId=2
[ndbd]
HostName=cl2-bravo
NodeId=3
[mysqld]
HostName=sq1-alpha
NodeId=4
[mysqld]
HostName=sq1-alpha
NodeId=5
[mysqld]
HostName=sq1-alpha
NodeId=6
[mysqld]
HostName=sq1-alpha
NodeId=7
[mysqld]
HostName=sq2-bravo
NodeId=8
[mysqld]
HostName=sq2-bravo
NodeId=9
[mysqld]
HostName=sq2-bravo
NodeId=10
[mysqld]
HostName=sq2-bravo
NodeId=11
my.cnf on mysql api nodes
[mysqld]
# Options for mysqld process:
ndbcluster
ndb-connectstring=mgm-alpha
default_storage_engine=ndbcluster
ndb-mgmd-host = mgm-alpha:1186
ndb-cluster-connection-pool=4
[mysql_cluster]
# Options for MySQL Cluster processes:
ndb-connectstring=mgm-alpha # location of management server
One reason is that NDB Cluster does not handle large transactions well. Some answers and tips can be found here:
http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-limitations-transactions.html
http://johanandersson.blogspot.co.nz/2012/04/mysql-cluster-how-to-load-it-with-data.html
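One common mitigation (a sketch based on the general advice in those links, not a quote from them) is to keep each transaction in the import small, since a single statement that touches too many rows can exhaust limits such as MaxNoOfConcurrentOperations:

-- In the mysql client, before sourcing the dump:
SET autocommit = 1;  -- commit per statement instead of one giant transaction
SOURCE /path/to/dump.sql;
-- If individual extended INSERTs are still too large, re-create the dump with
-- smaller statements, e.g.: mysqldump --net-buffer-length=32768 dbname > dump.sql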
I'm currently trying to set up a MySQL Cluster within my Tomcat web application project. I've already properly set up the entire MySQL Cluster, with my.cnf files sitting on both the data nodes and MySQL nodes, and the config.ini file sitting on the management nodes. All in all, I have 2 management nodes, 2 data nodes, and 2 SQL nodes.
This is the config.ini file:
#config.ini file
[ndb_mgmd]
NodeId=1
HostName=192.168.0.8
datadir=c:\my_cluster\ndb_data
[ndb_mgmd]
NodeId=2
HostName=192.168.0.2
datadir=c:\my_cluster\ndb_data
[ndbd default]
noofreplicas=2
datadir=c:\my_cluster\ndb_data
[ndbd]
hostname=192.168.0.1
NodeId=3
[ndbd]
hostname=192.168.0.6
NodeId=4
[mysqld]
hostname=192.168.0.2
[mysqld]
hostname=192.168.0.3
This is the my.cnf file
#my.cnf file
[mysqld]
ndbcluster
datadir=c:\\my_cluster\\mysqld_data
basedir=c:\\mysqlc
port=5000
ndb-connectstring=192.168.0.8,192.168.0.2
skip-name-resolve
[mysql_cluster]
ndb-connectstring=192.168.0.8,192.168.0.2
After setting up this entire cluster, the whole setup works. However, when I made a simple insertion of data from my web application, testing from the computer with the IP address 192.168.0.6, the insertion did not take place in the data nodes' databases. Instead, the data was inserted into the SQL nodes' local databases.
Please advise me on what I should do to ensure that insertions of new data go to the data nodes' databases.
Just to check - what makes you believe that the data is stored locally in the MySQL server rather than in the data nodes?
When you created the tables, did you include the engine=ndb option? Check the output of SHOW CREATE TABLE to make sure.
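For example, for a hypothetical table mytab:

SHOW CREATE TABLE mytab;
-- The output should include ENGINE=ndbcluster; if it shows InnoDB instead,
-- the table is local to that SQL node. An existing table can be moved into
-- the data nodes with:
ALTER TABLE mytab ENGINE=NDBCLUSTER;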
Andrew.