I have a MySQL NDB cluster (details below). The problem is that even the simplest thing, such as restoring a database that was dumped using mysqldump, takes an absolute age: about 6 hours to restore a DB that's 745MB in size and has approx. 2.7 million rows across about 30 tables, all pretty standard stuff.
I've looked for bottlenecks: no single CPU core is overloaded, nor are the disks or the network, so why is it so slow?
FYI, while importing a database the network is utilised at approx. 2Mbit/s and the NDB nodes are writing to disk at about 1MB per second, so the hardware is hardly utilised. There is no swapping (the DB is entirely in memory), no single core is maxed out by any process, and there is no notable I/O wait.
I've got two machines, each with 4 quad-core Xeon CPUs and 32GB RAM. Between them they host a MySQL Cluster; the nodes run in VirtualBox VMs with the following specs:
SQL API nodes * 2: 4GB RAM, 4 cores
NDB data nodes * 2: 19GB RAM, 8 cores
Management node * 1: 4GB RAM, 4 cores
Note: I run the NDB data nodes using ndbmtd, and the SQL API nodes use the ndb-cluster-connection-pool=4 parameter.
Does anyone have any idea why it's so slow? I'm simply unable to find a single bottleneck.
config.ini
[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster
[ndb_mgmd]
HostName=mgm-alpha
NodeId=1
[TCP DEFAULT]
SendBufferMemory=12M
ReceiveBufferMemory=12M
[ndbd default]
NoOfReplicas=2
DataMemory=15000M
IndexMemory=1048M
MaxNoOfConcurrentOperations=100000
RedoBuffer=32M
MaxNoOfAttributes=5000
MaxNoOfOrderedIndexes=1000
TimeBetweenEpochs=500
DiskCheckpointSpeed=20M
DiskCheckpointSpeedInRestart=100M
MaxNoOfExecutionThreads=8
DataDir=/var/lib/mysql-cluster
[ndbd]
HostName=cl1-alpha
NodeId=2
[ndbd]
HostName=cl2-bravo
NodeId=3
[mysqld]
HostName=sq1-alpha
NodeId=4
[mysqld]
HostName=sq1-alpha
NodeId=5
[mysqld]
HostName=sq1-alpha
NodeId=6
[mysqld]
HostName=sq1-alpha
NodeId=7
[mysqld]
HostName=sq2-bravo
NodeId=8
[mysqld]
HostName=sq2-bravo
NodeId=9
[mysqld]
HostName=sq2-bravo
NodeId=10
[mysqld]
HostName=sq2-bravo
NodeId=11
my.cnf on the MySQL API nodes
[mysqld]
# Options for mysqld process:
ndbcluster
ndb-connectstring=mgm-alpha
default_storage_engine=ndbcluster
ndb-mgmd-host = mgm-alpha:1186
ndb-cluster-connection-pool=4
[mysql_cluster]
# Options for MySQL Cluster processes:
ndb-connectstring=mgm-alpha # location of management server
One reason is that NDB Cluster does not handle large transactions well. Some answers and tips can be found here:
http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-limitations-transactions.html
http://johanandersson.blogspot.co.nz/2012/04/mysql-cluster-how-to-load-it-with-data.html
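A common workaround along the lines of those links is to keep each transaction small and spread the load across both SQL API nodes. This is only a rough sketch (the database name mydb and the split into two part files are placeholders; the hostnames are the ones from the config above):
# dump with each multi-row INSERT capped at 64KB, so no single statement
# becomes one huge NDB transaction
mysqldump --net_buffer_length=65536 mydb > mydb.sql
# split the dump by table into two files (manually or with a script), then
# restore each half through a different SQL API node in parallel
mysql -h sq1-alpha mydb < mydb-part1.sql &
mysql -h sq2-bravo mydb < mydb-part2.sql &
wait
Each statement then commits well within MaxNoOfConcurrentOperations (100000 in the config above), and the inserts are spread over more connections instead of funnelling through a single mysqld.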
Related
I have MySQL 5.6 (InnoDB) installed on a Windows Server 2008 machine with 1GB of RAM. mysqld uses 415MB of memory. How can I reduce memory usage? I read that it is possible by configuring my.ini: key_buffer_size, innodb_buffer_pool_size, etc. Please tell me the correct settings to minimize memory usage.
You can check Configuring MySQL to use minimal memory:
# /etc/my.cnf:
innodb_buffer_pool_size=5M
innodb_log_buffer_size=256K
query_cache_size=0
max_connections=10
key_buffer_size=8
thread_cache_size=0
host_cache_size=0
innodb_ft_cache_size=1600000
innodb_ft_total_cache_size=32000000
# per thread or per operation settings
thread_stack=131072
sort_buffer_size=32K
read_buffer_size=8200
read_rnd_buffer_size=8200
max_heap_table_size=16K
tmp_table_size=1K
bulk_insert_buffer_size=0
join_buffer_size=128
net_buffer_length=1K
innodb_sort_buffer_size=64K
#settings that relate to the binary log (if enabled)
binlog_cache_size=4K
binlog_stmt_cache_size=4K
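A rough way to reason about the result (hedged, since the exact overhead varies by version and platform): mysqld's footprint is approximately the global buffers (buffer pool, log buffer, key buffer, caches) plus max_connections times the per-thread and per-operation buffers (thread_stack, sort, read, join and net buffers), plus the server's fixed base overhead. With the values above, the configurable buffers add up to only a few megabytes. You can confirm what the server actually applied with:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE '%buffer_size%';
SHOW VARIABLES LIKE 'max_connections';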
I have a server (MS Windows Server 2012 R2 Datacenter, 64GB RAM, 2TB+ disk space) running MySQL 5.0. When I start the MySQL server, right off the bat it allocates 214,000 handles. Is that normal? I've been looking into this because I am trying to run an application that executes many unique queries over thousands of records, and it is just crawling.
I have changed query_cache_size from 160M to 0M in the my.ini file as query caching will not benefit this application. Still no change in handles. I'm not sure what else I can do to fix this. Does anyone have any ideas?
The server is:
MySQL 5.0.60sp1-enterprise-gpl-nt
There are a ton of options. Here are what I think are the relevant ones (I could be wrong; I am not an expert):
[mysqld]
default_storage_engine=InnoDB
innodb_file_per_table
innodb_flush_method=unbuffered
lower_case_table_names=2
max_allowed_packet=48M
max_heap_table_size=64777216
max_connections=3010
query_cache_size=0M
table_cache=6020
tmp_table_size=16M
thread_cache_size=64
myisam_max_sort_file_size=100G
myisam_max_extra_sort_file_size=100G
key_buffer_size=20M
read_buffer_size=64K
read_rnd_buffer_size=256K
innodb_additional_mem_pool_size=15M
innodb_flush_log_at_trx_commit=1
innodb_buffer_pool_size=709M
innodb_thread_concurrency=50
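If it's specifically the handle count you want to chase, a hedged starting point (standard statements, nothing version-specific) is to compare the configured caches with what is actually open. With innodb_file_per_table every cached table can hold open file handles, and each of the 3010 allowed connections reserves handles of its own:
SHOW GLOBAL STATUS LIKE 'Open%';   -- Open_files, Open_tables, Opened_tables
SHOW VARIABLES LIKE 'table_cache';
SHOW VARIABLES LIKE 'max_connections';
Reducing table_cache and max_connections to what the application really needs is the usual way to bring the handle count down.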
We are not able to create a stable configuration for a production server with MySQL and a Tomcat application server. MySQL very often throws this error:
MySQL: Out of memory (Needed 429496728 bytes)
It is a Windows Server 2012 machine with 64 GB of RAM. We can see in the process tab that mysqld has committed 64 GB of RAM (that is, all the available RAM in the server).
For context: the application relies heavily on in-memory (MEMORY engine) tables in the MySQL server.
See the my.ini configuration:
[client]
port=3306
[mysql]
default-character-set=utf8
[mysqld]
port=3306
basedir="F:/MySQL Server 5.6/"
datadir="F:/MySQL Server 5.6/data/"
character-set-server=utf8
default-storage-engine=MyISAM
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
max_connections=200
query_cache_size=0
# table_cache=256
tmp_table_size=6G
max_heap_table_size=6G
max_tmp_tables=2048
open_files_limit=40000
thread_cache_size=8
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=32M
myisam_use_mmap=0
concurrent_insert=2
key_buffer_size=2G
read_buffer_size=512M
read_rnd_buffer_size=512M
sort_buffer_size=128M
bulk_insert_buffer_size=32M
#skip-innodb
innodb_additional_mem_pool_size=15M
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=7M
innodb_buffer_pool_size=686M
innodb_log_file_size=343M
innodb_thread_concurrency=34
slow_query_log=1
log-queries-not-using-indexes=1
long_query_time=1
log-output=FILE,TABLE
slow_query_log_file="F:/MySQL Server 5.6/localhost-slow-query.log"
event-scheduler=ON
Does anybody have a suitable solution to fix this?
We also ran MySQL Tuner to gather information. I cannot attach the full Tuner output as an image (not enough reputation), but the key figures are:
Up for 39 days
Data in MEMORY tables: 42G (281 tables)
Data in MyISAM tables: 31G (668 tables)
Total buffers: 2.3G global + 144.5M per thread (200 max threads)
Maximum possible memory usage: 30.5G (47% of installed RAM)
Key buffer size / total MyISAM indexes: 640M / 7.0G
Query cache disabled
Joins performed without indexes: 337737
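A rough back-of-the-envelope check of those figures (assuming the Tuner numbers are accurate) shows where the memory goes: the 42G held in MEMORY-engine tables is not part of the Tuner's "maximum possible memory usage" estimate, so the real worst case is roughly
  42G    MEMORY tables
+ 30.5G  Tuner maximum (2.3G global buffers + 200 threads x 144.5M per thread)
= ~72G   which is more than the 64 GB installed, before the OS and Tomcat take their share.
So either the MEMORY-table footprint or the per-thread buffers (read_buffer_size and read_rnd_buffer_size are 512M each, sort_buffer_size is 128M) has to shrink, or max_connections has to come down.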
I have 2 computers running Windows 7 and I wanted to try MySQL database replication.
The configuration is as follows:
2 management nodes
2 data nodes
2 mysqld nodes
I have configured the config.ini file:
[ndbd default]
noofreplicas=2
[ndbd]
hostname=192.168.0.1
NodeId=1
[ndbd]
hostname=192.168.0.2
NodeId=2
[ndb_mgmd]
hostname=192.168.0.1
NodeId=101
[ndb_mgmd]
hostname=192.168.0.2
NodeId=102
[mysqld]
NodeId=51
hostname=192.168.0.1
[mysqld]
NodeId=52
hostname=192.168.0.2
and the my.cnf on the first machine (192.168.0.1):
[mysqld]
ndb-nodeid=51
ndbcluster
datadir=c:\\Users\\Brian\\my_cluster\\mysqld_data
basedir=c:\\Users\\Brian\\mysqlc
port=5000
server-id=51
log-bin
and the my.cnf on the second machine (192.168.0.2):
[mysqld]
ndb-nodeid=52
ndbcluster
datadir=c:\\Users\\Brian\\my_cluster\\mysqld_data
basedir=c:\\Users\\Brian\\mysqlc
port=5000
server-id=52
log-bin
However, when I logged in to the database on one node (the "master"), created a new database and a new table, I realised that the other node does not sync it.
Could someone help me address this issue?
Brian,
first of all, just to check terminology - it appears that you're trying to use "MySQL Cluster" (where the data is held - and synchronously replicated - in the data nodes rather than in the mysqlds) rather than "MySQL Replication" (which asynchronously replicates data between mysqlds).
I'll assume that the mysqlds are the ones that came with the MySQL Cluster package ('regular' mysqlds will not work correctly).
In order for the tables to be stored in the data nodes (and hence be visible through all mysqlds) rather than in the local mysqld, you must specify that MySQL Cluster is the storage engine to be used: "CREATE TABLE mytab (....) ENGINE=NDBCLUSTER;". If you don't do that then the tables will be created with the InnoDB storage engine and will be local to each mysqld.
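For example (the table definition here is just an illustration):
CREATE TABLE mytab (id INT PRIMARY KEY, val VARCHAR(32)) ENGINE=NDBCLUSTER;
SHOW CREATE TABLE mytab;   -- should report ENGINE=ndbcluster
Alternatively, you can set default_storage_engine=ndbcluster in the [mysqld] section of each my.cnf (as in the first my.cnf in this thread) so that tables go into the cluster even when no ENGINE clause is given.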
Note that if you want your MySQL Cluster to be completely fault tolerant then you should move the ndb_mgmds to their own host(s) rather than co-locating them with data nodes; the reason for this is explained in MySQL Cluster fault tolerance – impact of deployment decisions.
Andrew.
I'm currently trying to set up a MySQL Cluster for my Tomcat web application project. I've already set up the entire MySQL Cluster, with my.cnf files sitting on both the data nodes and the MySQL (SQL) nodes, and the config.ini file sitting on the management node. All in all, I have 2 management nodes, 2 data nodes, and 2 SQL nodes.
This is the config.ini file:
#config.ini file
[ndb_mgmd]
NodeId=1
HostName=192.168.0.8
datadir=c:\my_cluster\ndb_data
[ndb_mgmd]
NodeId=2
HostName=192.168.0.2
datadir=c:\my_cluster\ndb_data
[ndbd default]
noofreplicas=2
datadir=c:\my_cluster\ndb_data
[ndbd]
hostname=192.168.0.1
NodeId=3
[ndbd]
hostname=192.168.0.6
NodeId=4
[mysqld]
hostname=192.168.0.2
[mysqld]
hostname=192.168.0.3
This is the my.cnf file
#my.cnf file
[mysqld]
ndbcluster
datadir=c:\\my_cluster\\mysqld_data
basedir=c:\\mysqlc
port=5000
ndb-connectstring=192.168.0.8,192.168.0.2
skip-name-resolve
[mysql_cluster]
ndb-connectstring=192.168.0.8,192.168.0.2
After setting all of this up, the cluster itself works. However, when I inserted some data through my web application while testing from the computer with IP address 192.168.0.6, the insertion did not take place in the data nodes' databases. Instead, the data was inserted into the SQL nodes' local databases.
Please advise me on what I should do to ensure that newly inserted data goes to the data nodes' databases.
Just to check - what makes you believe that the data is stored locally in the MySQL server rather than in the data nodes?
When you created the tables, did you include the engine=ndb option? Check the output of SHOW CREATE TABLE to make sure.
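If you want to check every table in the schema at once, something like this (the schema name is a placeholder) works from any of the SQL nodes:
SELECT table_name, engine FROM information_schema.tables WHERE table_schema = 'your_db';
Any table showing an engine other than ndbcluster lives only in that local mysqld; it can usually be moved into the cluster with ALTER TABLE your_table ENGINE=NDBCLUSTER; provided it meets NDB's restrictions.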
Andrew.