I want to set up an Active-Active / Master-Master NDB Cluster using 2 servers.
One server is configured as the NDB management node with the following config:
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=1 # Number of replicas
[ndb_mgmd]
# Management process options:
hostname=172.24.114.202 # Hostname of the manager
datadir=/var/lib/mysql-cluster # Directory for the log files
NodeId=1
[ndbd]
hostname=172.24.117.30 # Hostname/IP of the first data node
NodeId=2 # Node ID for this data node
datadir=/usr/local/mysql/data # Remote directory for the data files
[mysqld]
# SQL node options:
hostname=172.24.114.202 # In our case the MySQL server/client is on the same Droplet as the cluster manager
NodeId=3
[mysqld]
hostname=172.24.117.30
NodeId=4
172.24.114.202 >> the NDB management server's IP; a MySQL server is also installed on it.
172.24.117.30 >> the NDB data node; a MySQL server is also installed on it.
But I get the following error:
root@rstest1:/var/lib/mysql-cluster# service ndb_mgmd restart
root@rstest1:/var/lib/mysql-cluster# tail -f ndb_1_cluster.log
Node removed
2022-11-22 12:59:59 [MgmtSrvr] INFO -- Starting configuration change, generation: 3
2022-11-22 12:59:59 [MgmtSrvr] INFO -- Configuration 4 commited
2022-11-22 12:59:59 [MgmtSrvr] INFO -- Config change completed! New generation: 4
2022-11-22 12:59:59 [MgmtSrvr] INFO -- Node 1: Node 2 Connected
2022-11-22 13:00:00 [MgmtSrvr] INFO -- Node 2: Started arbitrator node 1 [ticket=067f000700524b50]
2022-11-22 13:00:00 [MgmtSrvr] WARNING -- Unable to allocate nodeid for API at 172.24.117.30. Returned error: 'No free node id found for mysqld(API).'
2022-11-22 13:00:03 [MgmtSrvr] WARNING -- Unable to allocate nodeid for API at 172.24.117.30. Returned error: 'No free node id found for mysqld(API).' - Repeated 2 times
2022-11-22 13:00:06 [MgmtSrvr] WARNING -- Unable to allocate nodeid for API at 172.24.117.30. Returned error: 'No free node id found for mysqld(API).' - Repeated 2 times
2022-11-22 13:00:09 [MgmtSrvr] WARNING -- Unable to allocate nodeid for API at 172.24.117.30. Returned error: 'No free node id found for mysqld(API).'
2022-11-22 13:00:12 [MgmtSrvr] WARNING -- Unable to allocate nodeid for API at 172.24.117.30. Returned error: 'No free node id found for mysqld(API).' - Repeated 2 times
2022-11-22 13:00:15 [MgmtSrvr] WARNING -- Unable to allocate nodeid for API at
Below is the mysqld config on node 172.24.117.30:
[mysqld]
# Options for mysqld process:
ndbcluster # run NDB storage engine
[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=172.24.114.202 # location of management server
Also, below is the NDB data node config on the same node:
[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=172.24.114.202 # location of cluster manager
mysqld config on the first node (172.24.114.202):
[mysqld]
# Options for mysqld process:
ndbcluster # run NDB storage engine
[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=172.24.114.202 #
I am not sure what's wrong with the configuration; I have also set a NodeId for every node.
Host1:
ip: 42.a1.b1.c1 (in the configuration file, a1, b1, c1 are replaced by real values)
mysql-cluster-community-server, mysql-cluster-community-management-server and mysql-cluster-community-data-node are all installed on this host.
Host2:
ip: 119.a2.b2.c2 (in the configuration file, a2, b2, c2 are replaced by real values)
Both mysql-cluster-community-server and mysql-cluster-community-data-node are installed on this host.
None of the mysql-cluster-* programs on either host has been started yet.
cat /var/lib/mysql-cluster/config.ini on host1 outputs:
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=1 # Number of fragment replicas
DataMemory=80M # How much memory to allocate for data storage
IndexMemory=18M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example NDB Cluster setup.
# NOTE: IndexMemory is deprecated in NDB 7.6 and later; in
# these versions, resources for all data and indexes are
# allocated by DataMemory and any that are set for IndexMemory
# are added to the DataMemory resource pool
ServerPort=2202 # This is the default value; however, you can use any
# port that is free for all the hosts in the cluster
# Note1: It is recommended that you do not specify the port
# number at all and simply allow the default value to be used
# instead
# Note2: The port was formerly specified using the PortNumber
# TCP parameter; this parameter is no longer available in NDB
# Cluster 7.5.
[ndb_mgmd]
# Management process options:
HostName=42.a1.b1.c1 # Hostname or IP address of management node
DataDir=/var/lib/mysql-cluster # Directory for management node log files
#[ndbd]
#Options for data node "A":
# (one [ndbd] section per data node)
#HostName=42.a1.b1.c1 # Hostname or IP address
#NodeId=2 # Node ID for this data node
#DataDir=/usr/local/mysql/data # Directory for this data node's data files
[ndbd]
#Options for data node "B":
HostName=119.a2.b2.c2 # Hostname or IP address
NodeId=3 # Node ID for this data node
DataDir=/usr/local/mysql/data # Directory for this data node's data files
[mysqld]
#SQL node options:
HostName=119.a2.b2.c2 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --ndb-nodeid=1 on host1 outputs:
MySQL Cluster Management Server mysql-5.7.32 ndb-7.6.16
2020-11-07 16:35:20 [MgmtSrvr] WARNING -- at line 5: [DB] IndexMemory is deprecated, will use Number bytes on each ndbd(DB) node allocated for storing indexes instead
2020-11-07 16:35:20 [MgmtSrvr] ERROR -- The hostname this node should have according to the configuration does not match a local interface. Attempt to bind '42.a1.b1.c1' failed with error: 99 'Cannot assign requested address'
Why did the error "The hostname this node should have according to the configuration does not match a local interface" occur?
Thanks a lot.
This happens because /etc/hosts has not yet been loaded into RAM at boot time when your NDB service starts.
In my case I resolved it just by adding a 5-second pause to the service script.
Detail:
[Service]
ExecStartPre=-/bin/sleep 5
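For completeness, here is a minimal sketch of the whole drop-in this corresponds to; the unit name ndb_mgmd.service and the override path are assumptions about this setup, so match them to whatever unit your packages actually install:
# /etc/systemd/system/ndb_mgmd.service.d/override.conf  (assumed unit name and path)
[Service]
# Wait a few seconds so name resolution via /etc/hosts is available before ndb_mgmd starts;
# the leading "-" tells systemd to ignore a failure of this pre-start command.
ExecStartPre=-/bin/sleep 5
Then run systemctl daemon-reload and restart the service so the override is picked up.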
OS and mysql cluster version
OS: Linux centos7
mysql cluster: mysql-cluster-community-7.5.8-1.el7.x86_64
Server list
192.168.1.101 ndbd node1
192.168.1.102 ndbd node2
192.168.1.103 ndb_mgmd
192.168.1.104 mysql(api) node1
192.168.1.105 mysql(api) node2
The two data nodes (ndbd) are OK, but the SQL node (mysql) cannot connect to ndb_mgmd. The network is OK. SELinux and the firewall are disabled.
My config
mgmd config(/var/lib/mysql-cluster/config.ini)
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
ServerPort=2202
[ndb_mgmd]
HostName=192.168.1.103
DataDir=/var/lib/mysql-cluster
[ndbd]
HostName=192.168.1.101
NodeId=2
DataDir=/opt/mysql/data
[ndbd]
HostName=192.168.1.102
NodeId=3
DataDir=/opt/mysql/data
[mysqld]
HostName=192.168.1.104
mysql config(/etc/config.ini)
[mysqld]
user=mysql
ndbcluster
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
default-storage-engine=NDBCLUSTER
ndb-connectstring=192.168.1.103
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[mysql_cluster]
ndb-connectstring=192.168.1.103
Error log
mgmd log
2017-11-27 20:28:35 [MgmtSrvr] WARNING -- Failed to allocate nodeid for API at 192.168.1.104. Returned error: 'No free node id found for mysqld(API).'
2017-11-27 20:28:36 [MgmtSrvr] INFO -- Alloc node id 4 failed, no new president yet
sql node log(mysqld)
2017-11-28T03:18:43.565265Z 4 [Warning] NDB: Failed to acquire global schema lock, error: (4009)Cluster Failure
2017-11-28T03:18:43.566664Z 4 [Warning] NDB: Failed to acquire global schema lock, error: (4009)Cluster Failure
1) You started the management server from the wrong node: it is being started on 192.168.1.104, but according to the config it should be on 192.168.1.103.
2) You are missing a [mysqld] section for the second MySQL server at 192.168.1.105.
Some recommendations:
1) Assign node IDs to all nodes.
2) Add a spare API node slot as well, so that NDB tools such as ndb_restore can also connect (a minimal sketch follows below).
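As a rough sketch of what the node sections of the adjusted config.ini could look like (the node IDs and the extra [api] slot are illustrative assumptions, not part of the original configuration):
[ndb_mgmd]
NodeId=1
HostName=192.168.1.103
DataDir=/var/lib/mysql-cluster
[ndbd]
NodeId=2
HostName=192.168.1.101
DataDir=/opt/mysql/data
[ndbd]
NodeId=3
HostName=192.168.1.102
DataDir=/opt/mysql/data
[mysqld]
NodeId=4
HostName=192.168.1.104
[mysqld]
NodeId=5
HostName=192.168.1.105
[api]
NodeId=6   # spare slot with no HostName, so tools such as ndb_restore can connect from any host
After editing config.ini, restart ndb_mgmd with --reload (or clear its configuration cache) so the new sections take effect.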
I am trying to deploy a MySQL cluster (4 machines) with 1 management node, 1 SQL node, and 2 data nodes. I am following these complementary tutorials (first part, second part, third part, fourth part) from the official MySQL website. However, I have a problem with the SQL node, which is always not connected, as you can see here on the management node:
$ sudo ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.31.35.40 (mysql-5.6.23 ndb-7.4.5, starting, Nodegroup: 0)
id=3 @10.31.35.42 (mysql-5.6.23 ndb-7.4.5, starting, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.31.37.108 (mysql-5.6.23 ndb-7.4.5)
[mysqld(API)] 1 node(s)
id=4 (not connected, accepting connect from 10.31.35.41)
I don't know why the SQL node is not connected to the management node! I looked for similar problems on Google, but I still can't resolve my problem.
I tried several times to do:
/etc/init.d/mysql.server stop
and
/etc/init.d/mysql.server start
but in vain.
Here is also the output of mysqld in verbose mode, which mentions a problem whose cause I don't know:
ubuntu@10-31-35-41:/usr/local/mysql/bin$ sudo mysqld --verbose --help
150404 5:26:00 [Note] Plugin 'FEDERATED' is disabled.
150404 5:26:00 [ERROR] mysqld: unknown option '--ndbcluster'
......
Also, when I check the location of mysqld_safe, I get the output below, which I think is not normal, since as you can see in the first part of the tutorial the folder was put in /usr/local/mysql (and I am supposed to use mysql.server, aren't I?):
$ which mysqld_safe
/usr/bin/mysqld_safe
Moreover, I don't know if there is a conflict with the previously installed MySQL package.
This is /etc/mysql/my.cnf (on the SQL node; it is the same on the working data nodes):
[mysqld]
# Options for mysqld process:
ndbcluster # run NDB storage engine
[mysql_cluster]
# Options for MySQL Cluster processes:
ndb-connectstring=10.31.37.108 # location of management server
This is the config.ini file on the management node (ndb_mgmd):
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
DataMemory=80M # How much memory to allocate for data storage
IndexMemory=18M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.
[tcp default]
# TCP/IP options:
portnumber=2202 # This is the default; however, you can use any
# port that is free for all the hosts in the cluster
# Note: It is recommended that you do not specify the port
# number at all and simply allow the default value to be used
# instead
[ndb_mgmd]
# Management process options:
hostname=10.31.37.108 # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster # Directory for MGM node log files
[ndbd]
# Options for data node "A":
# (one [ndbd] section per data node)
hostname=10.31.35.40 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
[ndbd]
# Options for data node "B":
hostname=10.31.35.42 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
[mysqld]
# SQL node options:
hostname=10.31.35.41 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
Thank you very much in advance for your help!
To resolve the problem I changed the permissions on the /usr/local/mysql/data directory as follows:
sudo chown -R ubuntu data/
so now instead of:
$ /usr/local/mysql/support-files/mysql.server restart
* MySQL server PID file could not be found!
Starting MySQL
. * The server quit without updating PID file (/usr/local/mysql/data/ip-172-31-46-103.pid).
Now I get:
$ /usr/local/mysql/support-files/mysql.server restart
Shutting down MySQL
.. *
Starting MySQL
. *
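In case it helps, a quick way to confirm the fix is to check the ownership of the data directory and then ask the management node whether the API slot is connected (the paths and node id are from my setup, so adjust them to yours):
$ ls -ld /usr/local/mysql/data    # should be owned by the user that runs mysqld
$ sudo ndb_mgm -e show            # id=4 should now be listed as connected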
I am trying to configure a MySQL cluster on CentOS, but I have some issues I don't know how to solve, and I would really appreciate some help.
The mysql cluster environment:
DB1 - 192.168.50.101 - Management Server (MGM) node.
DB2 - 192.168.50.102 - Storage Server (NDBD) node 1.
DB3 - 192.168.50.103 - Storage Server (NDBD) node 2.
The steps I followed to configure the whole cluster:
Configure the Management Server node (192.168.50.101)
1.1 Install mysql server and start it:
# yum install mysql mysql-server
# chkconfig --levels 235 mysqld on
# /etc/init.d/mysqld start
1.2 Install cluster packages:
# rpm -ivh MySQL-ndb-management-5.0.90-1.glibc23.i386.rpm
# rpm -ivh MySQL-ndb-tools-5.0.90-1.glibc23.i386.rpm
1.3 Create cluster directory and the config.ini file
# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# vi config.ini
1.4 Write the cluster config content into config.ini:
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=80M # How much memory to allocate for data storage
IndexMemory=18M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Section (MGM)
[NDB_MGMD]
#NodeId = 1
# IP address of the management node
HostName=192.168.50.101
# Storage Server Section (NDBD)
[NDBD]
#NodeId = 2
# IP address of the Storage Server (NDBD) node 1
HostName=192.168.50.102
DataDir=/var/lib/mysql
BackupDataDir=/var/lib/backup
DataMemory=100M
[NDBD]
#NodeId = 3
# IP address of the Storage Server (NDBD) node 2
HostName=192.168.50.103
DataDir=/var/lib/mysql
BackupDataDir=/var/lib/backup
DataMemory=100M
# one [MYSQLD] per storage node
# 2 Clients MySQL
[MYSQLD]
#NodeId = 5
[MYSQLD]
#NodeId = 6
1.5 Start the Management Service
# ndb_mgmd
1.6 Enter to the admin console
# ndb_mgm
1.7 Use the command SHOW to check the nodes status
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 nodes
id=2 (not connected, accepting connect from 192.168.50.102)
id=3 (not connected, accepting connect from 192.168.50.103)
[ndb_mgmd(MGM)] 1 node
id=1 @192.168.50.101 (Version: 5.0.95)
[mysqld(API)] 2 nodes
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
The management node configuration is OK, so let's configure one Storage Server node (192.168.50.102).
2.1 Install the MySQL server, as in step 1.1.
2.2 Download MySQL Cluster from "http://dev.mysql.com/downloads/cluster/"
2.3 Extract the content and copy the ndb* binaries to /usr/bin/.
2.4 Connect the Storage Server node to the Management Server.
ndbd --connect-string=192.168.50.101 --initial -n
And here is the problem. On the management server, the following error is displayed:
ndb_mgm > Node 2: Forced node shutdown completed. Ocurred during startphase 0.
Caused by error 2350: 'Invalid configuration received from Management
Server(Configuration error). Permanent error, external action needed'.
And on the storage server node, the displayed warning is:
[ndbd] INFO -- Angel connected to '102.168.50.101:1186'
[ndbd] INFO -- Angel allocated nodeid: 2
[ndbd] WARNING -- Configuration didn't contain generation (likely old ndb_mgmd
Does someone know what I should do to fix the problem?
Thank you!
In case it helps someone else, I'll paste here the response given on the MySQL Forum...
It looks like you're trying to mix management node binaries from your repository (very old version) with a non-Cluster MySQL Server (not allowed) with data nodes from mysql.com (very new).
The first step should be to use binaries for all of the nodes from mysql.com.
If you'd like to try out the browser-driven auto-installer to make your life simpler then take a look at http://www.clusterdb.com/mysql-cluster/auto-installer-labs-release/ or if you'd like to set things up by hand then take a look at http://www.clusterdb.com/mysql-cluster/deploying-mysql-cluster-over-multiple-hosts/
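Before reinstalling, it may also help to confirm exactly which builds each host is running; a rough sketch of the checks (the rpm query assumes an RPM-based distro):
rpm -qa | grep -i mysql    # packages installed from the distribution repository
ndb_mgmd --version         # management server binary
ndbd --version             # data node binary
mysqld --version           # SQL node binary; Cluster builds report an ndb-x.y.z suffix
If the version strings differ, or mysqld reports no ndb suffix at all, that confirms the mix described above.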
Hello Andrew,
thank you very much for your reply. Indeed, I was using an old MySQL version on the mgm node.
I downloaded everything from http://www.mysql.com/downloads/cluster/, set up every node as I described before, and connected the data node to the management node using:
shell> /usr/local/mysql/bin/ndbd --connect-string=192.168.56.101
-- Angel connected to 192.168.56.101:1186
-- Angel allocated nodeid: 2
Also, I checked the management node using the show command:
ndb_mgm> show
Cluster Configuration
[ndbd(NDB)] 2 nodes
id=2 @192.168.50.102 (mysql-5.5.29 ndb-7.2.10, starting, Nodegroup: 0)
id=3 (not connected, accepting connect from 192.168.50.103)
[ndb_mgmd(MGM)] 1 node
id=1 @192.168.50.101 (Version: 5.0.95)
[mysqld(API)] 2 nodes
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
As you can see, the data node (id 2) is connecting to the mgm node, but when I try to start MySQL on that data node (id 2), it will not start...
shell> /etc/init.d/mysql start
Starting MySQL.................................The server quit without updating PID file (/usr/local/mysql/data/localhost.node2-1. [FAILED])
I checked the problem, and it seems that mysql does not like the config I wrote in /etc/my.cnf.
At the beginning I had:
-- my.cnf --
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
And then I added the ndbcluster config:
-- my.cnf --
[client]
port = 3306
socket = /tmp/mysql.sock
[mysqld]
port = 3306
ndbcluster
ndb-connectstring=192.168.56.107
[mysqld_cluster]
ndb-connectstring=192.168.56.107
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
The thing is, if I comment out the ndbcluster part, mysql starts correctly, but if the ndbcluster line or the ndb-connectstring line is not commented out, mysql does not start. What should I do? I do not understand why mysql does not start when it has the ndbcluster configuration. Is there something wrong?
I notice that you only have one of the two ndbd processes running (and it's still in the starting state). This will prevent the mysqld from connecting to the cluster, so you need to start the second ndbd first and wait until ndb_mgm reports both of them as being in the running state.
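Roughly, and reusing the addresses from the config.ini shown earlier (so treat this as a sketch rather than exact commands for your machines), the order would be:
# On each data node (192.168.50.102 and 192.168.50.103):
/usr/local/mysql/bin/ndbd --connect-string=192.168.50.101
# On the management node, repeat until both data nodes have left the "starting" state:
ndb_mgm -e show
# Only once both ndbd processes are reported as started, start mysqld on the SQL node:
/etc/init.d/mysql start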
I also tried to start both ndbd processes first, but they get stuck in the starting stage:
ndb_mgm> show
Cluster Configuration
[ndbd(NDB)] 2 nodes
id=2 @192.168.50.102 (mysql-5.5.29 ndb-7.2.10, starting, Nodegroup: 0)
id=3 @192.168.50.103 (mysql-5.5.29 ndb-7.2.10, starting, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node
id=1 @192.168.50.101 (mysql-5.5.29 ndb-7.2.10)
[mysqld(API)] 2 nodes
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
I checked the mgm log (ndb_1_cluster.log):
[MgmtSrvr] INFO -- Node 3: Initial start, waiting for 2 to connect, nodes [all: 2 and 3 connected: 3 no-wait:]
[MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [all: 2 and 3 connected: 3 no-wait:]
Even I tried to start them from the mgm:
ndb_mgm> 2 start
Database node 2 is being started.
ndb_mgm> 3 start
Database node 3 is being started.
But there is no "node 2 : Start initiated" message...
I am running the cluster on three virtual machines with CentOS 6.3. Is that the problem? Maybe the config file?
Normally this type of startup problem results from firewall rules blocking access to random high ports on another node in the cluster. ndbd nodes use these ports to communicate with each other.
The solution is either to allow all connections between these hosts or to allow the specific ports defined by ServerPort.
See: http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-ndbd-definition.html#ndbparam-ndbd-serverport
and
http://johanandersson.blogspot.com/2009/05/cluster-fails-to-start-self-diagnosis.html
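As one hedged example for this layout (CentOS 6 with iptables; the port value and subnet are illustrative and must match your own network), you could pin the data node port in config.ini and then open the cluster ports explicitly:
# In config.ini on the management node (illustrative fixed port):
[NDBD DEFAULT]
ServerPort=2202
# On each host, allow the cluster subnet to reach the relevant ports (CentOS 6 / iptables):
iptables -I INPUT -p tcp -s 192.168.50.0/24 --dport 1186 -j ACCEPT   # management node
iptables -I INPUT -p tcp -s 192.168.50.0/24 --dport 2202 -j ACCEPT   # data node ServerPort
iptables -I INPUT -p tcp -s 192.168.50.0/24 --dport 3306 -j ACCEPT   # mysqld, if reached remotely
service iptables save
The simpler alternative, as noted above, is just to allow all traffic between the cluster hosts.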
Matthew, you were right! I allowed the ports between all nodes and all is working fine!
Thank you very much, Matthew and Andrew!
I am having trouble with the initial startup of the mysql-cluster management node and would appreciate any help I can get with this issue. See my two examples of failure below, followed by my config.ini file. The first example shows the basic command to start the daemon and the error it produces. The second attempts to skip the step I believe caused the first error, but only results in a different error (one that I can find no solution to).
~$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.5.25 ndb-7.2.7
2012-07-27 16:44:51 [MgmtSrvr] INFO -- The default config directory '/usr/local/mysql/mysql-cluster' does not exist. Trying to create it...
Failed to create directory '/usr/local/mysql/mysql-cluster', error: 2
2012-07-27 16:44:51 [MgmtSrvr] ERROR -- Could not create directory '/usr/local/mysql/mysql-cluster'. Either create it manually or specify a different directory with --configdir=
~$ ndb_mgmd --skip-config-cache -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.5.25 ndb-7.2.7
2012-07-27 16:44:51 [MgmtSrvr] INFO -- Skipping check of config directory since config cache is disabled.
Failed to parse parameters for log handler: 'FILE:filename=/var/lib/mysql-cluster/ndb_1_cluster.log,maxsize=1000000,maxfiles=6', error:13 '(null)'
/var/lib/mysql-cluster/config.ini:
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
DataMemory=80M # How much memory to allocate for data storage
IndexMemory=18M # How much memory to allocate for index storage
[tcp default]
# TCP/IP options:
[ndb_mgmd]
# Management process options:
hostname=192.168.0.3 # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster # Directory for MGM node log files
NodeId=1
[ndbd]
# Options for data node-1:
# (one [ndbd] section per data node)
hostname=192.168.0.1 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
NodeId=2
[ndbd]
# Options for data node-2:
hostname=192.168.0.2 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
NodeId=3
[mysqld]
# SQL node options:
hostname=192.168.0.4 # Hostname
You should try running those commands with sudo:
~$ sudo ndb_mgmd -f /var/lib/mysql-cluster/config.ini
~$ sudo ndb_mgmd --skip-config-cache -f /var/lib/mysql-cluster/config.ini
Your problem seems to be with permissions, not configuration.
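If running as root is not desirable, the first error message itself points to two other options, sketched roughly here (paths only as examples):
# Option 1: create the default config directory once, then start ndb_mgmd normally:
sudo mkdir -p /usr/local/mysql/mysql-cluster
# Option 2: keep the configuration cache somewhere the ndb_mgmd user can write to:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster
Either way, the second error (error 13, i.e. permission denied) is about writing ndb_1_cluster.log under /var/lib/mysql-cluster, which is why sudo also works.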