mysql ndb_mgmd --no-nodeid-checks - Startup error

I am having trouble starting up ndb_mgmd. Here's some information.
OS: Ubuntu 12.04LTS
MySQL -V: Ver 5.5.25a-ndb-7.2.7 for Linux on x86_64 (Source distribution)
Base Dir = /usr/local/mysql
Default MySQL Conf = /xconf/mysql/my.cnf
Default MySQL Data = /xdata/mysql
/xconf/mysql/my.cnf
[mysqld]
ndbcluster
socket=/xdata/mysql/mysql.sock
[mysqld_safe]
err-log=/xlog/mysqld.log
pid-file=/xdata/runtime/mysqld/mysqld.pid
[ndb_mgmd]
configdir=/xdata/mysql-cluster
config-file=/xdata/mysql-cluster/config.ini
/xdata/mysql-cluster/config.ini
[NDBD DEFAULT]
NoOfReplicas=2
DataDir=/xdata/mysql-cluster
# Management Node
[ndb_mgmd]
NodeId=1
HostName=192.168.2.100
DataDir=/xdata/mysql-cluster
# Storage Nodes
[ndbd]
NodeId=2
HostName=192.168.2.101
[ndbd]
NodeId=3
HostName=182.168.2.102
# SQL Nodes
[mysqld]
HostName=192.168.2.100
[mysqld]
HostName=192.168.2.101
[mysqld]
HostName=192.168.2.102
When I execute:
#xuser:/xdata/mysql-cluster$ ndb_mgmd
MySQL Cluster Management Server mysql-5.5.25 ndb-7.2.7
[MgmtSrvr] ERROR -- Could not determine which nodeid to use for this node. Specify it with --ndb-nodeid=<nodeid> on command line
Any ideas as to why this is happening?

The problem was that I wasn't connected to the cluster's network: none of my local interfaces carried the 192.168.2.100 address given for the management node in config.ini, so ndb_mgmd could not match this host against any configured node and therefore could not determine which node id to use.
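If you hit the same error, a quick sanity check (a minimal sketch, using the paths and address from the config above) is to confirm the management node's address is actually bound to a local interface before starting the daemon:
# Confirm the HostName from the [ndb_mgmd] section exists on a local interface
ip addr | grep 192.168.2.100
# Then start the management server, pointing it at the cluster config explicitly
ndb_mgmd -f /xdata/mysql-cluster/config.ini --configdir=/xdata/mysql-cluster
If the grep returns nothing, this host is not on the cluster network (or has the wrong address) and ndb_mgmd will not be able to claim its node id.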

Related

percona xtradb cluster mysql startup error

I have 2 nodes on which to set up Percona XtraDB Cluster, and I have successfully installed the packages. Then I configured my.cnf:
NODE 1:
cat >>/etc/my.cnf<<EOF
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=democluster
wsrep_cluster_address=gcomm://192.168.254.126,192.168.254.127
wsrep_node_name=centosvm02
wsrep_node_address=192.168.254.126
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=uertest:123abc#A
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
EOF
NODE 2:
cat >>/etc/my.cnf<<EOF
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=democluster
wsrep_cluster_address=gcomm://192.168.254.126,192.168.254.127
wsrep_node_name=centosvm02
wsrep_node_address=192.168.254.127
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=uertest:123abc#A
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
EOF
After the configuration was done I started the first node with systemctl start mysql@bootstrap, but it failed. I found this error on startup:
(screenshot of the startup error: the wsrep provider library could not be loaded)
The error message is quite clear. The needed library cannot be found. If you installed PXC 8, then the library is galera4, not galera3. Make sure you installed all appropriate packages.
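As a quick check (a sketch; the exact paths depend on your distribution and the PXC packages you installed), look at which Galera library is actually on disk and point wsrep_provider at it:
# See which Galera library the installed packages provide
ls /usr/lib64/galera3/libgalera_smm.so /usr/lib64/galera4/libgalera_smm.so 2>/dev/null
# On PXC 8 the provider line in my.cnf should reference galera4:
# wsrep_provider=/usr/lib64/galera4/libgalera_smm.so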

MySQL not starting because of galera config

I have mysql installed on Ubuntu machine and it is working fine. But as soon as I add the galera conf the mysql doesn't start.
Below is the command I am using.
root@Abhishek-Dev-D1:/usr/local# /etc/init.d/mysql start --wsrep-new-cluster
Starting MySQL
.... * The server quit without updating PID file (/var/run/mysqld/mysqld.pid).
Galera.conf
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://172.31.254.196,172.31.254.197,172.31.254.198"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
# Galera Node Configuration
wsrep_node_address="172.31.254.196"
wsrep_node_name="node1"
When I remove /etc/mysql/conf.d/galera.conf and then start MySQL, it starts properly.
When I installed it for the first time it was working fine, but during testing I rebooted one of the nodes. After that, MySQL no longer starts with the Galera configuration in place.

Failed to allocate nodeid for API at <sql_node_ipaddr>. Returned error: 'No free node id found for mysqld(API).'

OS and mysql cluster version
OS: Linux centos7
mysql cluster: mysql-cluster-community-7.5.8-1.el7.x86_64
Server list
192.168.1.101 ndbd node1
192.168.1.102 ndbd node2
192.168.1.103 ndb_mgmd
192.168.1.104 mysql(api) node1
192.168.1.105 mysql(api) node2
The two data nodes (ndbd) are OK, but the SQL node (mysql) cannot connect to ndb_mgmd. The network is OK, and SELinux and the firewall are disabled.
My config
mgmd config (/var/lib/mysql-cluster/config.ini)
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
ServerPort=2202
[ndb_mgmd]
HostName=192.168.1.103
DataDir=/var/lib/mysql-cluster
[ndbd]
HostName=192.168.1.101
NodeId=2
DataDir=/opt/mysql/data
[ndbd]
HostName=192.168.1.102
NodeId=3
DataDir=/opt/mysql/data
[mysqld]
HostName=192.168.1.104
mysql config (/etc/config.ini)
[mysqld]
user=mysql
ndbcluster
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
default-storage-engine=NDBCLUSTER
ndb-connectstring=192.168.1.103
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[mysql_cluster]
ndb-connectstring=192.168.1.103
Error log
mgmd log
2017-11-27 20:28:35 [MgmtSrvr] WARNING -- Failed to allocate nodeid for API at 192.168.1.104. Returned error: 'No free node id found for mysqld(API).'
2017-11-27 20:28:36 [MgmtSrvr] INFO -- Alloc node id 4 failed, no new president yet
sql node log (mysqld)
2017-11-28T03:18:43.565265Z 4 [Warning] NDB: Failed to acquire global schema lock, error: (4009)Cluster Failure
2017-11-28T03:18:43.566664Z 4 [Warning] NDB: Failed to acquire global schema lock, error: (4009)Cluster Failure
1) You started the management server from the wrong node: it was started on 192.168.1.104, but according to the config it should be on 192.168.1.103.
2) You are missing a [mysqld] section for the second MySQL server at 192.168.1.105.
Some recommendations:
1) Assign node ids to all nodes.
2) Add an API node as well, so that NDB tools can also connect.
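Putting those recommendations together, a revised config.ini could look like the sketch below (node ids and the spare [api] slot are illustrative choices, not values from the original post; the [ndbd default] section stays as before):
[ndb_mgmd]
NodeId=1
HostName=192.168.1.103
DataDir=/var/lib/mysql-cluster
[ndbd]
NodeId=2
HostName=192.168.1.101
DataDir=/opt/mysql/data
[ndbd]
NodeId=3
HostName=192.168.1.102
DataDir=/opt/mysql/data
[mysqld]
NodeId=4
HostName=192.168.1.104
[mysqld]
NodeId=5
HostName=192.168.1.105
# spare slot so NDB utilities (ndb_desc, ndb_show_tables, ...) can connect from any host
[api]
NodeId=6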

Error starting mysql cluster management node (ndb_mgmd) on Ubuntu

I am having trouble with the initial start up of the mysql-cluster management node and would appreciate any help I can get about this issue. See my two examples of failure below followed by my config.ini file. The first example shows the basic command to start the daemon and the error it produces. The second attempts to skip the process I believe caused the error in the first, but only results in a different error (one that I can find no solution to).
~$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.5.25 ndb-7.2.7
2012-07-27 16:44:51 [MgmtSrvr] INFO -- The default config directory '/usr/local/mysql/mysql-cluster' does not exist. Trying to create it...
Failed to create directory '/usr/local/mysql/mysql-cluster', error: 2
2012-07-27 16:44:51 [MgmtSrvr] ERROR -- Could not create directory '/usr/local/mysql/mysql-cluster'. Either create it manually or specify a different directory with --configdir=
~$ ndb_mgmd --skip-config-cache -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.5.25 ndb-7.2.7
2012-07-27 16:44:51 [MgmtSrvr] INFO -- Skipping check of config directory since config cache is disabled.
Failed to parse parameters for log handler: 'FILE:filename=/var/lib/mysql-cluster/ndb_1_cluster.log,maxsize=1000000,maxfiles=6', error:13 '(null)'
/var/lib/mysql-cluster/config.ini:
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
DataMemory=80M # How much memory to allocate for data storage
IndexMemory=18M # How much memory to allocate for index storage
[tcp default]
# TCP/IP options:
[ndb_mgmd]
# Management process options:
hostname=192.168.0.3 # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster # Directory for MGM node log files
NodeId=1
[ndbd]
# Options for data node-1:
# (one [ndbd] section per data node)
hostname=192.168.0.1 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
NodeId=2
[ndbd]
# Options for data node-2:
hostname=192.168.0.2 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
NodeId=3
[mysqld]
# SQL node options:
hostname=192.168.0.4 # Hostname
You should try running those commands with sudo:
~$ sudo ndb_mgmd -f /var/lib/mysql-cluster/config.ini
~$ sudo ndb_mgmd --skip-config-cache -f /var/lib/mysql-cluster/config.ini
Your problem seems to be with permissions, not configuration.
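If you prefer not to run the daemon via sudo every time, another option (a sketch, based on the --configdir hint in the error output above) is to create the default config directory once, or to keep the cached configuration next to config.ini in a directory your user can write to:
# Create the default config directory once (needs root) ...
sudo mkdir -p /usr/local/mysql/mysql-cluster
# ... or keep the binary config cache alongside config.ini
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster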

MySql Cluster - Management node wont start

Can't start the management node on MySQL Cluster.
I am issuing the following command:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --initial --configdir=/var/lib/mysql-cluster/ --ndb-nodeid=1
And I am getting the following error:
MySQL Cluster Management Server mysql-5.5.22 ndb-7.2.6
2012-07-05 02:45:24 [MgmtSrvr] ERROR -- The hostname this node should have according to the configuration does not match a local interface. Attempt to bind '192.168.177.134' failed with error: 99 'Cannot assign requested address'
config.ini
[ndbd default]
NoOfReplicas=2
[ndb_mgmd]
hostname=192.186.177.134
datadir=/var/lib/mysql-cluster
[ndbd]
hostname=192.168.177.132
datadir=/usr/local/mysql/data
[ndbd]
hostname=192.186.177.133
datadir=/usr/local/mysql/data
[mysqld]
hostname=192.168.177.131
In config.ini, sometimes you have 192.168 and sometimes 192.186. In particular:
[ndb_mgmd]
hostname=192.186.177.134
datadir=/var/lib/mysql-cluster
Should be:
[ndb_mgmd]
hostname=192.168.177.134
datadir=/var/lib/mysql-cluster
The hostname will then match that to which ndb_mgmd was attempting to bind, as described in the error message. You should also correct the hostname in the [ndbd] section.
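As a quick verification (a sketch; the exact interface output depends on the machine), confirm that the corrected address is actually assigned to a local interface before restarting the management node:
# The [ndb_mgmd] hostname must exist on a local interface of this host
ip addr show | grep 192.168.177.134
# Then restart the management node with the corrected config
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --initial --configdir=/var/lib/mysql-cluster/ --ndb-nodeid=1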