I'm currently trying to set up a MySQL Cluster within my Tomcat web application project. I've already set up the entire MySQL Cluster, with my.cnf files sitting on both the Data Nodes and SQL Nodes, and the config.ini file sitting on the management node. All in all, I have 2 Management Nodes, 2 Data Nodes, and 2 SQL Nodes.
This is the config.ini file:
#config.ini file
[ndb_mgmd]
NodeId=1
HostName=192.168.0.8
datadir=c:\my_cluster\ndb_data
[ndb_mgmd]
NodeId=2
HostName=192.168.0.2
datadir=c:\my_cluster\ndb_data
[ndbd default]
noofreplicas=2
datadir=c:\my_cluster\ndb_data
[ndbd]
hostname=192.168.0.1
NodeId=3
[ndbd]
hostname=192.168.0.6
NodeId=4
[mysqld]
hostname=192.168.0.2
[mysqld]
hostname=192.168.0.3
This is the my.cnf file:
#my.cnf file
[mysqld]
ndbcluster
datadir=c:\\my_cluster\\mysqld_data
basedir=c:\\mysqlc
port=5000
ndb-connectstring=192.168.0.8,192.168.0.2
skip-name-resolve
[mysql_cluster]
ndb-connectstring=192.168.0.8,192.168.0.2
After setting up this entire cluster, it starts and runs correctly. However, when I tested a simple data insertion from my web application on the computer with the IP address 192.168.0.6, the insertion did not take place in the data nodes' databases. Instead, the data was inserted into the SQL nodes' localhost databases.
Please advise me on what I should do to ensure that newly inserted data goes into the data nodes' databases.
Just to check - what makes you believe that the data is stored locally in the MySQL server rather than in the data nodes?
When you created the tables, did you include the engine=ndb option? Check the output of SHOW CREATE TABLE to make sure.
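For example, with a hypothetical table mytab, the check (and one possible fix) would look roughly like this:
-- See which storage engine the table was created with
SHOW CREATE TABLE mytab;
-- If it reports ENGINE=InnoDB (or MyISAM), the rows live only in that local mysqld.
-- Provided the table definition is compatible with NDB, it can be converted in place:
ALTER TABLE mytab ENGINE=NDBCLUSTER;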
Andrew.
Hi, I have an existing MySQL master-master replication setup between 3 servers, and all the servers are connected to each other. I'm replicating some tables in a specific database. There are about 80 tables in this database and only 10 of them are being replicated. I'm using the replicate-do-table option in my MySQL config file to tell MySQL which tables should be replicated.
At this stage I found that I need another table to be replicated which was not replicated before. When I add the new table to the MySQL config file, newly added data reaches all the servers, but the old existing data does not. What should I do to bring the older data to all the servers?
This is my MySQL config file:
default_table_encryption=ON
table_encryption_privilege_check=ON
replica_parallel_workers=4
enforce_gtid_consistency=ON
gtid_mode=ON
keyring_file_data=.....
early-plugin-load=keyring_file.so
skip_replica_start=OFF
auto_increment_increment=50
auto_increment_offset=1
server_id=3
replicate_same_server_id=0
replicate-do-db=db
replicate-do-table=db.table1
replicate-do-table=db.table2
replicate-do-table=db.table3
replicate-do-table=db.table4
replicate-do-table=db.table5
replicate-do-table=db.table6
replicate-do-table=db.table7
replicate-do-table=db.table8
replicate-do-table=db.table9
replicate-do-table=db.table10
replicate-do-table=db.table11
replicate-do-table=db.table12
slave-skip-errors=1032
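To be clear about what I changed when adding the new table: I just appended one more filter line of the same form (table13 here is only a placeholder for the real table name):
replicate-do-table=db.table13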
I have read that MariaDB's MaxScale (Binlog Server) can be used to relay the binlogs from a MySQL cluster to a single binlog server. However, I wanted to know whether it is possible to collect all the binlogs from different MySQL clusters and persist them on a single binlog server, with no MySQL slaves reading from them. If it is possible, how are conflicts such as the same database name existing in different MySQL clusters handled?
The binlogrouter module in MaxScale is a replication proxy. It stores the binary logs of the master on the MaxScale server. Unlike a normal replication slave, the binlogrouter will not write its own binary log. This means that the binary logs on the MaxScale server will be identical to those on the original master server.
To collect binlogs from multiple clusters, you need to configure multiple binlogrouter services and point each one of them to a master server. The binary logs are stored separately for each configured service.
Here's an example configuration of a binlogrouter service and a listener:
[Replication-Router]
type=service
router=binlogrouter
version_string=10.0.17-log
router_options=server_id=4000,binlogdir=/var/lib/maxscale/,filestem=mysql-bin
user=maxuser
passwd=maxpwd
[Replication-Listener]
type=listener
service=Replication-Router
protocol=MySQLClient
port=3306
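To collect from a second cluster under this approach, you would add another service/listener pair pointing at that cluster's master, with its own server_id, binlogdir and port (the names, port and directory below are just placeholders):
[Replication-Router-2]
type=service
router=binlogrouter
version_string=10.0.17-log
router_options=server_id=4001,binlogdir=/var/lib/maxscale/cluster2/,filestem=mysql-bin
user=maxuser
passwd=maxpwd
[Replication-Listener-2]
type=listener
service=Replication-Router-2
protocol=MySQLClient
port=3307
Because each service writes into its own binlogdir, binlogs from clusters that happen to contain identically named databases never end up in the same set of files.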
Read the Binlogrouter documentation for more information about the various options.
I have a Storm cluster which consists of Nimbus and 4 Supervisors, and I have MySQL installed on the same node as Nimbus:
Cluster information
Nimbus - 192.168.0.1
Supervisors - 192.168.0.2 ~ 5
MySQL - same as the Nimbus, bind to 0.0.0.0 (so that I can connect remotely)
I am trying to update a MySQL table in real time, so if my bolt is running on, say, the ...4 node, how does this node (bolt) send data (updates) to the MySQL server that is running on another node? In Hadoop we have HDFS, which is available on all nodes of a cluster. My question is: do I need some distributed storage to store tuples, or should I make some configuration changes to my MySQL or Storm topology?
You should be able to open a database connection from each node to your MySQL installation. The connection goes over the network, so you can update your DB remotely.
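One thing that does need to be in place for this (a sketch only; the account name, password and schema below are made up) is a MySQL account that is allowed to log in from the supervisor hosts, since accounts are often restricted to localhost:
CREATE USER 'storm'@'192.168.0.%' IDENTIFIED BY 'secret';
GRANT SELECT, INSERT, UPDATE ON mydb.* TO 'storm'@'192.168.0.%';
Each bolt then opens an ordinary client connection (e.g. over JDBC) to 192.168.0.1:3306, or whatever port mysqld listens on, with that account and issues its UPDATE statements.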
I have a MySQL NDB cluster (details below). The problem is that when I do the simplest thing, such as restoring a database that was dumped using mysqldump, it takes an absolute age! I.e. 6 hours to restore a DB that's 745MB in size and has approx 2.7 million rows across about 30 tables, all pretty standard stuff.
I've looked for bottlenecks: no single CPU core is overloaded, nor the disks, nor the network, so why so slow?
FYI, while importing a database the network is utilised at approx 2Mbit/s and the ndb nodes are writing to disk at about 1MB per second... hardly utilised. There is no swapping... the db is entirely in memory... no single core is maxed out by a process... no wait-state to note....
I've got two machines, each with 4 quad-core Xeon CPUs and 32GB RAM. Between them they host a MySQL cluster; the nodes are hosted with VirtualBox and the specs are as follows:
sql API * 2: 4GB ram 4 cores
sql NDB * 2: 19GB ram 8 cores
management node: 4GB 4 cores
Note: I run the NDB nodes using ndbmtd; the SQL API nodes use the ndb-cluster-connection-pool=4 param.
Does anyone have any idea why it's so slow? I'm simply unable to find a single bottleneck!
config.ini
[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster
[ndb_mgmd]
HostName=mgm-alpha
NodeId=1
[TCP DEFAULT]
SendBufferMemory=12M
ReceiveBufferMemory=12M
[ndbd default]
NoOfReplicas=2
DataMemory=15000M
IndexMemory=1048M
MaxNoOfConcurrentOperations=100000
RedoBuffer=32M
MaxNoOfAttributes=5000
MaxNoOfOrderedIndexes=1000
TimeBetweenEpochs=500
DiskCheckpointSpeed=20M
DiskCheckpointSpeedInRestart=100M
MaxNoOfExecutionThreads=8
DataDir=/var/lib/mysql-cluster
[ndbd]
HostName=cl1-alpha
NodeId=2
[ndbd]
HostName=cl2-bravo
NodeId=3
[mysqld]
HostName=sq1-alpha
NodeId=4
[mysqld]
HostName=sq1-alpha
NodeId=5
[mysqld]
HostName=sq1-alpha
NodeId=6
[mysqld]
HostName=sq1-alpha
NodeId=7
[mysqld]
HostName=sq2-bravo
NodeId=8
[mysqld]
HostName=sq2-bravo
NodeId=9
[mysqld]
HostName=sq2-bravo
NodeId=10
[mysqld]
HostName=sq2-bravo
NodeId=11
my.cnf on the MySQL API nodes
[mysqld]
# Options for mysqld process:
ndbcluster
ndb-connectstring=mgm-alpha
default_storage_engine=ndbcluster
ndb-mgmd-host = mgm-alpha:1186
ndb-cluster-connection-pool=4
[mysql_cluster]
# Options for MySQL Cluster processes:
ndb-connectstring=mgm-alpha # location of management server
One reason is that NDB Cluster does not handle large transactions well. Some answers and tips can be found here:
http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-limitations-transactions.html
http://johanandersson.blogspot.co.nz/2012/04/mysql-cluster-how-to-load-it-with-data.html
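As a rough illustration of the large-transaction point (the table name and batch size below are arbitrary), committing in batches that stay well below MaxNoOfConcurrentOperations (100000 in the config above) tends to behave much better than one giant transaction or very long multi-row INSERTs:
SET autocommit = 0;
-- insert a few thousand rows at a time ...
INSERT INTO mytable VALUES (1, 'a'), (2, 'b');
COMMIT;
-- ... then repeat for the next batch
SET autocommit = 1;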
I have 2 computers running Windows 7 and I wanted to try MySQL database replication.
The configuration is as follows:
2 mgmt nodes
2 data nodes
2 mysqld nodes
I have configured the config.ini file:
[ndbd default]
noofreplicas=2
[ndbd]
hostname=192.168.0.1
NodeId=1
[ndbd]
hostname=192.168.0.2
NodeId=2
[ndb_mgmd]
hostname=192.168.0.1
NodeId=101
[ndb_mgmd]
hostname=192.168.0.2
NodeId=102
[mysqld]
NodeId=51
hostname=192.168.0.1
[mysqld]
NodeId=52
hostname=192.168.0.2
and the my.cnf(.1)
[mysqld]
ndb-nodeid=51
ndbcluster
datadir=c:\\Users\\Brian\\my_cluster\\mysqld_data
basedir=c:\\Users\\Brian\\mysqlc
port=5000
server-id=51
log-bin
and the my.cnf(.2)
[mysqld]
ndb-nodeid=52
ndbcluster
datadir=c:\\Users\\Brian\\my_cluster\\mysqld_data
basedir=c:\\Users\\Brian\\mysqlc
port=5000
server-id=52
log-bin
However, when I logged in to the database on the (master) node, created a new database, and inserted a new table, I realised that the other node was not able to sync it.
Could someone help me address this issue?
Brian,
first of all, just to check terminology - it appears that you're trying to use "MySQL Cluster" (where the data is held - and synchronously replicated - in the data nodes rather than in the mysqlds) rather than "MySQL Replication" (which asynchronously replicates data between mysqlds).
I'll assume that the mysqlds are the ones that came with the MySQL Cluster package ('regular' mysqlds will not work correctly).
In order for the tables to be stored in the data nodes (and hence be visible through all mysqlds) rather than in the local mysqld, you must specify that MySQL Cluster is the storage engine to be used: "CREATE TABLE mytab (....) ENGINE=NDBCLUSTER;". If you don't do that then the tables will be created with the InnoDB storage engine and will be local to each mysqld.
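For example (the columns here are just placeholders), on one of the mysqlds:
CREATE TABLE mytab (
  id INT NOT NULL PRIMARY KEY,
  val VARCHAR(50)
) ENGINE=NDBCLUSTER;
INSERT INTO mytab VALUES (1, 'hello');
Connecting to the other mysqld, the same table and its row should then be visible:
SELECT * FROM mytab;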
Note that if you want your MySQL Cluster to be completely fault tolerant then you should move the ndb_mgmds to their own host(s) rather than co-locating them with data nodes; the reason for this is explained in MySQL Cluster fault tolerance – impact of deployment decisions.
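As a sketch of that layout (the IP addresses below are made-up examples), the management section of config.ini would then reference two dedicated hosts instead of reusing the data node machines:
[ndb_mgmd]
hostname=192.168.0.3
NodeId=101
[ndb_mgmd]
hostname=192.168.0.4
NodeId=102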
Andrew.